An attempt at explaining deontology

I wrote this mostly as an exercise in exploring an alternative system of ethics outside of utilitarianism. I found it laborious, and Kant’s writing in general quite difficult to digest. I hope this makes some sense and can serve as something of a framework for approaching The Metaphysics of Morals (n.b. the outline here is taken from chapter 1 of Kant’s Groundwork for the Metaphysics of Morals; I didn’t read the entirety of his work, just enough to try to understand the Categorical Imperative). I also tried my hand at a criticism of the theory from a utilitarian point of view toward the end.


Immanuel Kant was born and educated in 18th century Prussia and was one of the foremost thinkers of the Enlightenment. Kant’s ethics, known as Kantian deontology, seeks to situate morality in the realm of pure reason, drawing on a priori knowledge and principles of logic to define a supreme principle of morality, from which duties can be drawn that in turn shape the maxims by which individuals act in the world. This supreme principle is dubbed the Categorical Imperative (CI), with ‘Categorical’ referring to its unconditional applicability, and ‘Imperative’ referring to its obligatory nature. The CI rests on the necessity of the Good Will and allows moral agents to deduce what ought to be done.

Kant contextualises the discussion toward formulating his ethics in the Preface of Groundwork for the Metaphysics of Morals[i]. Ethics is located within the broader field of philosophy alongside natural science and logic, and philosophy can further be divided along the line of empirical and pure. Kant argues that a moral law may only be found in a priori knowledge, cleansed of any empirical or experiential understanding. Only a law derived from these foundations may be considered necessary and universalisable. Kant sees the formulation of such a law as necessary to inform our actions in the field of practical anthropology.

Kant acknowledges that the context into which his Groundwork is being placed is one in which the distinction between pure rational and empirical thought has not been sharply drawn. He makes direct comparison with the work of Wolff, which he criticises for not drawing attention to the divide between pure and empirical approaches. Within the broader context of Enlightenment thinking in the 18th century, Kant’s blend of empirical and rational thought stands as distinctive. The claim is quite strongly held that morality must be based only in the realm of pure philosophy.

The driving force behind developing a metaphysics of morals is twofold for Kant. In the first instance, as rational beings it is of intellectual interest to develop such a theory. More importantly, it is required for rational beings to avoid moral corruption. The arguments toward establishing the broader moral theory begin with the concept of the Good Will. Kant argues “Nothing in the world can possibly be conceived that could be called ‘good’ without qualification except a good will”1. This is in contrast with other qualities, which may be exercised with ill intent (e.g. loyalty to a villain), leaving the Good Will alone as something good in and of itself. The Good Will’s value does not lie in any consequential ends it aims to achieve, but rather in “how it wills”1. This protects moral worth from circumstances in which good consequences happen not to be achieved, and places it outside of the ends of actions.

Kant argues that, as rational beings, our practical faculty of reason is poorly suited to merely achieving the goal of happiness. He draws the distinction between reason and instinct, arguing that reason has a “very poor arrangement”1 for carrying out the purpose of achieving happiness, stating this would be better achieved by instinct. Instead, reason is meant to shape our will in the pursuit of a will that is good in itself. This is not to say that happiness is an undesirable end, or that it cannot be obtained, but that reason should instead strive to develop our will, and that perhaps as a second-order consequence, a level of happiness or contentment may be achieved.

From the Good Will, the concept of duty emerges. Given its basis in reason, a Good Will at times will be obliged to act in a way that may “get in the way of things that the person merely prefers”1, and thus will need to run counter to instinct or preference. When an act is carried out (on a maxim that is in accordance with moral law) despite not aligning with individual preference, it can be said to be performed from duty. This is distinguished from an act performed merely in accordance with duty, which conforms to individual preference and simply happens to align with what is moral. Whilst actions performed in accordance with duty may be commendable, they are not deserving of the “high esteem” that acts done from duty achieve. Kant explicitly states:

“For an action to have genuine moral worth it must be done from duty”1

This is a nuanced point that Kant recognises is difficult to assess from an outside perspective. For instance, a shopkeeper who gives a customer the correct amount of change may do so against her preference to keep the money, because duty requires it of her. However, out of fear of being caught, or to encourage the customer to return to her shop, the shopkeeper may perform the exact same action. To an outside observer these circumstances are indistinguishable; however, only one is of genuine moral worth.

The concept of duty is of central importance to Kant’s theory, and allows him to develop the basis for a supreme principle of morality. The idea is developed that the moral relevance of an action does not lie in its consequences or even in the act itself, but rather in the maxim involved, that is to say the generalised principle on which the will acts. The maxim upon which the will acts is its a priori principle, with the material considerations of any particular situation (time, person, place) being the a posteriori drivers of the maxim’s use. Thus, something done from duty is done according to formal principles of reason that determine the will, free from empirical considerations. Kant continues: because our duty stems from purely rational considerations, when we act from duty we act out of respect for a moral law, and preferences are overpowered by respect for that law. The notion of individual autonomy applies insofar as the moral agent’s will should be influenced by respect for the maxim “I am to follow this law even if it thwarts all my desires”1.

Thus, there exists some moral law to which the will must adhere for it to be the Good Will. It is from here, a place in which this fact alone represents what is required to constitute the Good Will, that Kant deduces the CI, as follows:

“nothing remains to serve as a principle of the will except conduct’s universally conforming to law as such. That is, I ought never to act in such a way that I couldn’t also will that the maxim on which I act should be a universal law”1

For Kant, this principle must serve as the guide for the Good Will for duty to remain legitimate. To judge the moral value of our actions, we must judge the maxims upon which we act against the CI. If the maxim upon which we act is incompatible with universalisability, then the action cannot be considered a moral one.

Kant distinguishes the CI from Hypothetical Imperatives, which are contingent on the ends one seeks to achieve rather than being deduced from duty. For instance, if one wants to get good grades, one ought to study. These Hypothetical Imperatives are contingent on circumstance, whereas the CI holds across all situations and has the characteristic of universalisability.

The above constitutes Kant’s general formulation of the categorical imperative. Throughout his work he offers four separate formulations that appeal to various elements of human rationality, but all appeal to the same rationality grounded in pure philosophy. For instance, Kant’s second formulation focuses on the inherent dignity of persons, stating:

“Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.”1

Additional formulations of the categorical imperative focus on autonomy or universalisability, and each appeals to the same supreme principle of morality.

Kantian deontology distinguishes itself from other high moral theories by focussing not on the consequences of actions, but on the motives, or more precisely the maxims, behind them. John Stuart Mill, a key figure in the development of utilitarianism, the best known of the consequentialist moral theories, offered clear criticism of deontological methods. Mill argues that Kant’s deontology must essentially reduce to consequentialist ‘trade-offs’ for it to hold at all, and is therefore no more than an elaboration on a base consequentialist theory[ii]. The principle that an action only holds moral worth if its maxim can be willed to be a universal law requires reason to identify the circumstances under which such maxims would become contradictory. Kant famously uses the example of lying to show the process. He concludes that we could not will a universal law to lie, stating:

“It would be futile to offer stories about my future conduct to people who wouldn’t believe me; or if they carelessly did believe me and were taken in, would pay me back in my own coin.”1

Mill’s response to this claim may be that what concerns us about universalising this maxim is, in essence, its consequences: that our words would no longer hold any weight or value, and that as a result we could no longer effectively communicate with others. On this view, when approaching morality from a deontological standpoint, we are simply applying the principles of utilitarianism in a more roundabout fashion. This would amount to deontology forming a type of rule utilitarianism, wherein we use the CI to form rules that aim to benefit individuals and society and avoid harmful outcomes.

On the surface, this is a compelling response to Kantian deontology. However, approaching the CI as a reframing of rule utilitarianism seems to miss an essential element of Kant’s moral theory. Throughout Groundwork for the Metaphysics of Morals, Kant acknowledges that consequences can have importance, but holds that they ought not to be the linchpin we focus on when assessing moral worth, stating “we ought to say that acts performed in accordance with duty deserve praise and encouragement but don’t deserve high esteem”1. He argues that consequences presuppose acts, which in turn presuppose maxims, so to look only at consequences as a measure of moral worth would be to judge something other than the morality of the act. Consequences therefore exist, but are not the basis for the moral value of actions.

Kant’s deontology has remained highly influential as a major competing school of thought amongst high moral theories. It offers an approach to morality that lies beyond the constraints of an action’s consequences and seeks to be derived from a place of pure human reasoning. Kant rests his theory on the key concepts of the Good Will, the duty derived from it, and the subsequent universal principle of the categorical imperative. Although criticism has been aimed at the methods of deontology, it remains an internally coherent and plausible approach to morality.


[i] Kant I. Grounding for the Metaphysics of Morals : On a Supposed Right to Lie Because of Philanthropic Concerns. Cambridge, MA: Hackett Publishing Company, Inc.; 1993.

[ii] Loizides A. Mill’s A System of Logic: Critical Appraisals: Routledge; 2017.

“Why did you come here?”

I am a few weeks into working in a small hospital in Kamanga, in the Mwanza region of Tanzania. The town is small, with approximately 4000 people, and most homes do not have electricity or running water. There is limited food available, and few things to do. The local people here live a tough life: the main means of living is subsistence farming, and diseases of poor hygiene are extremely common. The lakes region of Tanzania has a particularly high incidence of malaria, with children suffering multiple infections throughout their childhoods. In short, it is a tough place to live. So in a usual conversation, after explaining where I am from to one of the locals, I am inevitably asked why I came here, and I find it a difficult question to answer.

In Australia, when asked why I wanted to go to Africa for a number of months, I found the question simple and easy to answer. Because I want to work in global health. That is a straightforward enough answer for most people; they give an interested half-nod and move the discussion on to other topics. Most people had never given more than a passing thought to the issues beyond the walls of the hospital within which these conversations were had, so it was easy enough to move the conversation along without too much inquisition into my underlying motives. In fact, if I am being frank, the amount of serious thought I had actually given to the question of why seems vanishingly small in comparison to the thought given to logistical planning around visas, accommodation, vaccines and flights. However, when the question is posed by someone who is living under these conditions not out of choice but out of necessity, by someone who if given the chance would probably jump at a life in a country like Australia, the response seems to require a solid and coherent justification.

Giving the answer “because I want to work in global health” feels tokenistic of their entire life. Because I want to do this thing, work in global health, I have come from my life of relative affluence and comfort to gain some experiences from you, and then leave. At least that is how it feels when I give the response. It is normally met with some range of confused responses. “What does that mean, global health?” is a common follow-up. I find it incredibly ironic that in Australia to say one wants to work in “global health” is the same as to say one wants to work in “electrophysiology” or “colorectal surgery”. The term is balled up and thrown around like some kind of object which people can grab onto and claim for themselves. I want to work on global health. But when asked by a nurse or doctor working in the health centre in a Tanzanian village, who I would suggest are the people really working in “global health”, the illusion somewhat falls to pieces. What is global health? Why the hell do I want to work on it? And more importantly, who qualified me to be the one to come here and ‘work’ on it in the first place?

The first question, of what global health is, is one worth resting on for a moment. When we talk about global health, we’re generally talking about addressing inequity in health outcomes across the globe, without regard for borders or nationality. This remains incredibly broad and encompasses a variety of cause areas, common examples being infectious disease control, maternal and child health, and access to clean water, nutrition and sanitation. The methodologies used to work on these cause areas are also vast. From lab-based science developing vaccinations against malaria, to on-the-ground humanitarian aid, there is a variety of ways any one individual can use their time and effort to contribute to the cause areas in question. So I guess my response is that I want to work on the problems that disproportionately impact the disadvantaged. Maybe part of the reason it feels a bit ‘off’ giving this response to a local is that, by answering in this way, you’re highlighting the line of ‘have’ and ‘have-not’ that exists between the two of you.

The next question, of why I want to do it, is also not obviously intuitive. From a moral standpoint, I believe that those of us fortunate enough to be born into an affluent society have something of a duty to use our working time to benefit those who, through no fault of their own, have not been so fortunate. But I’m certainly not in any unique position to do so as a PGY3 resident with no particularly outstanding skills. Possibly, the advice I give to the patient in front of me has some benefit on the margin to their wellbeing. But in terms of the alleviation of suffering at a broader scale, my contribution is less than a drop in the ocean. Perhaps the benefit is in learning the ropes, trying to grasp a more nuanced understanding of what makes people’s lives more difficult in a place so far from home, which may place me in a stronger position to contribute in the future. Actually coming here to Tanzania has taken a significant amount of time, effort and money. If I really wanted to change the world, might I have been better off simply working locum shifts at a high hourly rate and donating the difference to the Against Malaria Foundation? Surely at least this would have given some guarantee of the impact my efforts would have. Is it justifiable at all to even be here, given the concrete impact I could have had through this alternative route?

On an intuitive level, the answer seems to be that it ‘feels’ as though I am doing something by actually coming here. That, however, is a weak argument from an impact point of view. What about the fact that expending this time, effort and money somewhat legitimises my claim that I am interested in ‘global health’? I think this argument does carry a significant amount of weight. Advocacy, and efforts to get others interested in these issues of moral importance, would surely be more effective coming from someone who has witnessed the inequities first hand than from someone who has simply read about them from the comfort of their own life. I think it also offers a platform from which to launch into more effective projects such as research or further direct work. These are at least the arguments I’ve convinced myself of when attempting to internally justify the decision to come to Kamanga.

So where does that leave me when I’m asked by one of the Tanzanian health workers why I’ve left Sydney to come to Kamanga? I find actually talking through this reasoning difficult, like some kind of mental gymnastics that I can’t seem to properly articulate. Usually, I end up saying some version of ‘I’ve come here to learn about tropical diseases and to experience a different health system’. Whilst this is true to some extent, it’s certainly not the driving force behind why I am actually here. I don’t know what the right answer to this question is, but if I keep my consequentialist hat on, I don’t think it really matters. Although I do wonder, when I am on my sixth serving of rice and beans for the week, why I came here.

Satisficing vs maximising utilitarianism for the individual

Moral philosophy offers a number of high moral theories that aim to answer the question of what the right thing to do is, each with its own strengths and weaknesses. Utilitarianism is a high moral theory whose strengths tend to lie in problems at a societal level, and whose cost-benefit principles are shared with much of Western economics. Developments in utilitarian thought by philosophers such as Peter Singer have drawn on these maximising, agent-neutral and consequentialist principles to formulate an approach that morally requires the individual to give to the point of marginal utility. The arguments presented for such an approach are sound and coherent, and in theory applicable to the individual in a way that may maximise the good in society overall. However, decision theory suggests that an individual’s psychology functions very differently from the strictly logical, relying on heuristics or rules to make day-to-day decisions. This presents a problem for attempting to apply strictly logical principles to practical situations, as can be seen when individual situations are examined in isolation. Further, approaching life with the strictness required of a maximising utilitarian approach would mean forgoing much of what makes life meaningful. I argue that for these reasons, an individual should take a satisficing rather than maximising approach to morality, and set rules and standards that allow them to remain altruistic toward others in the world, without remaining at odds with their psychology or their desire for a life inclusive of varied experiences.


An argument for a satisficing approach to pulling the drowning child from the pond

When considering what the ‘right thing to do’ is as a society, organisation, or individual, many factors come into play. Traditionally, from a philosophical point of view, high moral theories have offered frameworks that can be used to make such decisions. These range from Aristotle’s virtue theory to Kant’s deontology and the consequentialist moral theories, including utilitarianism. At a more individual level, we often rely on intuitions and mental models to judge situations and make decisions, including those that may be considered moral or ethical. Decision theory is a field of science and philosophy that investigates the factors driving an individual’s decisions, and is a rapidly expanding space of psychological research (1).

In this essay I will assess the application of utilitarianism to moral decision making at an individual level, and will therefore not discuss alternative moral theories further. I will discuss the application of a utilitarian framework to the day-to-day decisions an individual may face, and highlight the tensions that can arise between intuitive decision making and utilitarian guidance. I will then discuss how the cost-benefit model is highly effective for decisions at the level of society, but has more limited utility in the case of the individual. I will conclude that while individuals should follow the utilitarian reasoning that we have an obligation to help others outside of our immediate vicinity, we should do so in a way that does not interfere with living a life that allows for personal experiences, enjoyment and relationships.

Introduction

Utilitarianism falls under the umbrella of consequentialism, which differentiates it from other high moral theories in that the consequences of an action alone carry its moral weight; intentions and motives are disregarded. John Stuart Mill, one of utilitarianism’s greatest modern proponents, outlines the aim of the theory as the greatest possible happiness for the greatest possible number of sentient beings (2). Inherently, a requirement of the utilitarian approach is impartiality on the part of the individual: for an act to be judged as morally good, the perspective must be taken from the view of a ‘benevolent spectator’. Utilitarianism is, in other words, an agent-neutral moral theory. This point is particularly relevant to the discussion in this paper, given that individuals often weigh the interests of those in proximity to themselves more heavily, and certainly put greater emphasis on their own interests, than on the interests of others unknown to them.

Within utilitarianism, how one should go about achieving and measuring the outcome of ‘happiness’ is contested. There is disagreement as to whether acts and their likely results should be what is assessed from a moral point of view (act utilitarianism), or whether the focus should be a set of rules that seeks to maximise ‘utility’ (rule utilitarianism). At a societal level, issues such as whether we should aim to maximise the mean happiness experienced by all individuals (average utilitarianism) or the absolute amount of happiness experienced across the entire population (total utilitarianism) remain contested, and have led to the formation of a new area of study known as population ethics (3).

For the purposes of this paper, the distinction I’d like to draw greatest attention to is that between maximising and satisficing versions of utilitarianism (4). Maximising utilitarianism states that an act is only permissible if it is the single act that most maximises the desired outcome; for instance, if a maximising consequentialist were aiming to save lives, the only moral thing to do would be the act that saved the most lives possible. The demandingness of maximising utilitarianism is quite clear. Satisficing utilitarianism argues that an act is morally permissible if it meets some minimum standard, and hence there may be multiple morally acceptable actions in any given circumstance. In the example of saving lives, there may exist some threshold number of lives saved above which an act would be considered morally acceptable.
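To make the distinction concrete, here is a minimal sketch in Python (my own, not drawn from any of the cited works); the acts and their utility values are entirely hypothetical. The maximising rule permits only the best-scoring act, while the satisficing rule permits anything clearing a chosen threshold.

```python
# Minimal sketch of the two decision rules, with made-up utility values.

acts = {"donate to bed nets": 100, "volunteer locally": 60, "buy extra shoes": 5}

def maximising_permissible(acts):
    """Maximising rule: only the act(s) with the highest utility are permissible."""
    best = max(acts.values())
    return [act for act, utility in acts.items() if utility == best]

def satisficing_permissible(acts, threshold):
    """Satisficing rule: any act meeting the minimum standard is permissible."""
    return [act for act, utility in acts.items() if utility >= threshold]

print(maximising_permissible(acts))        # ['donate to bed nets']
print(satisficing_permissible(acts, 50))   # ['donate to bed nets', 'volunteer locally']
```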

We have used ‘happiness’ as a term for the outcome utilitarianism seeks to achieve, however this too is a contentious issue that cannot be so easily dismissed. From Epicurus through Bentham and Mill, utility has more or less been equated with the presence of pleasure and the absence of pain. Mill states in Utilitarianism that “actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness” (2). This is considered the basis of hedonistic utilitarianism. A differing approach, known as preference utilitarianism, sees actions as right insofar as they satisfy the preferences of the agents being acted upon.

A maximising version of preference utilitarianism was popularised by the Australian philosopher Peter Singer in his landmark paper Famine, Affluence, and Morality (5). In this paper Singer argues that, given the abundance of famine and suffering in the world, those who live in relative affluence have a moral duty to give financial and other aid to those in greatest need, in the most effective way possible, to the point of marginal utility (i.e. to the point where giving any further would put one in a worse position than those being given to). He concludes:

“It follows from what I have said earlier that we ought to give money away, rather than spend it on clothes which we do not need to keep us warm. To do so is not charitable, or generous. Nor is it the kind of act which philosophers and theologians have called “super-erogatory” – an act which it would be good to do, but not wrong not to do. On the contrary, we ought to give the money away, and it is wrong not to do so.”

There is an abundance of suffering, famine and other less-than-utile things in this world, enough to absorb (almost) any one individual’s resources. If we follow the arguments laid out by Singer to their conclusion, we arrive at the point where we should be giving to the point of marginal utility, that is, to the point where giving any further would mean putting ourselves in a position of poverty. From this line of thought a new philosophy and social movement has emerged, known as effective altruism, which argues that one’s career, lifestyle choices and donations should be aligned with our moral obligation to maximise the amount of good that we do in the world (6,7). On a large scale, a maximising approach such as this is useful and lends itself to calculus and cost-benefit analyses. It treats individuals as equals and aims to maximise the good for the greatest number of people. However, when applied to an individual’s life, the obligation to do good in the world is placed above traditionally and intuitively held human values, including preferential treatment of friends or family, and requires one to take a truly agent-neutral stance, weighing one’s own pleasures and desires equally with those of others across the globe. How well does this hold up for the individual, given the significant demandingness, and how much weight should it be given when making choices about our individual lives?

Individual Choices

If we accept the demandingness required by this version of utilitarianism, our day-to-day lives would look very different. Given the scale on which suffering exists in the world, it would obliterate all individual desires and motives and lead us to conclude that nearly all our resources should be directed away from ourselves and those around us, toward areas where more good can be done with them. This approach holds up well in economic, policy or business decisions, allowing cost-benefit analyses to identify the most effective interventions possible. In the area of global health, cost-benefit analyses allow us to contribute maximally to the health of the population we’re seeking to help with the set amount of resources available (8). In the setting of an organisation or business whose existence is purely to fulfil a desired outcome, the ethic of attempting to maximise that outcome by way of cost-benefit calculus seems relatively uncontroversial. For the individual, however, it is not so straightforward. For the purposes of this paper we will discuss a version of preference utilitarianism in which the individual’s primary purpose is the satisfaction of their preferences. We may even extend this to say that an individual seeking appreciation of the aesthetic, or of unique experience, is ultimately pursuing the satisfaction of preferences. Given the level of demandingness maximising preference utilitarianism puts upon us, we would need to fully negate our individual preferences to work toward what Singer describes as marginal utility. Singer states we should give to the point where “giving more would cause oneself and one’s dependants as much suffering as one would prevent in Bengal” (5). For most individuals this level of giving would necessitate giving up many of the pursuits they see as satisfying their preferences.
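As a toy illustration of what ‘the point of marginal utility’ means in practice, the sketch below (my own, with entirely invented utility curves and income figures) keeps giving while each extra dollar is assumed to relieve more suffering for the recipient than it costs the giver, and stops once the two marginal quantities cross.

```python
# Toy model of Singer's "point of marginal utility". All functions and numbers
# are illustrative assumptions, not empirical estimates.

def marginal_benefit_to_recipient(given):
    # Assumed diminishing returns: the first dollars given do the most good.
    return 1.0 / (1 + 0.001 * given)

def marginal_cost_to_giver(given, income=80_000):
    # Assumed rising cost: each further dollar hurts the giver more as giving
    # approaches their entire income.
    return 0.1 / max(1e-9, 1 - given / income)

given = 0
while given < 80_000 and marginal_benefit_to_recipient(given) > marginal_cost_to_giver(given):
    given += 100  # give in $100 increments

print(f"Under these assumptions, the point of marginal utility is reached at about ${given}")
```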

Let us explore a few examples of relatively mundane, everyday scenarios an individual may encounter, consider how a maximising preference utilitarian approach may differ from ‘regular’ human decision making, and then comment on whether it would be reasonable to take one approach over the other. We will consider an individual buying a new pair of shoes, buying a new couch, deciding on the logistics of catching up with a friend for dinner, and planning a holiday.

An individual buying a new pair of shoes may be doing so out of necessity, i.e. they don’t have a functional pair, or out of want, i.e. they would like an additional pair for some reason other than the strictly functional. For the purposes of this example, let’s consider the latter, as the former could be fairly easily justified. If we adhere strictly to our utilitarian foundations, the shoes would at most bring us some mild level of happiness for a short period of time, before blending into the bottom of the spare-room wardrobe. This gain must be weighed against the value the resources would have had if utilised elsewhere, for instance donated to a charity providing bed nets in a malaria-endemic area. The evaluation to be made here is relatively straightforward from an agent-neutral standpoint: the potentially lifesaving use of the money is far more morally worthwhile than the preference-satisfying shoe purchase. Even on an individual level, it would not be too much for an altruistically inclined person to recognise the greater moral worth of the donation. This is not the ethos commonly expressed by our modern consumer society, but if we accept Singer’s arguments at any level, it is a conclusion that follows quite obviously. On an individual level, therefore, forgoing excessive consumer purchases in order to use resources more altruistically seems like a reasonable conclusion, as guided by preference utilitarianism.

Now for the couch. Let us assume the individual in question has had their old couch for several years and it has worn out to the point of being unusable. Having a couch is by no means a necessary part of life, certainly much less so than clothes to keep you warm or shoes to protect your feet. However, it could certainly be argued that having one improves your quality of life to some extent. Almost everyone living in a Western developed nation could recount flopping onto the couch after a long day, or relaxing to watch a favourite movie, both experiences which doubtless constitute some level of wellbeing or happiness. The purchase of a couch therefore represents a non-essential but beneficial purchase for the individual. The strict maximising utilitarian may conclude that this money is far better donated, as if used effectively it could perhaps save an entire life; can we really argue that a couch is more important than a life? But this is perhaps not reasonable, for if we accept this reasoning in the case of the couch, won’t we reduce our couch-less individual to a life of eating canned beans on the floor of a cold, run-down apartment so that they can maximise their donations? Peter Singer himself would argue that yes, this is the correct conclusion (9). As an individual, is a life devoid of anything excessive whatsoever one that we would really want to lead? Maybe it would therefore be more palatable to find some middle ground. Perhaps it is reasonable to settle for a couch that meets our minimum requirements, offering the comforts and experiences associated with being a couch owner, but not one that is in any way excessive in terms of style, size or luxury.

What we may conclude at this point is that it seems quite reasonable to accept the maximising utilitarian arguments, to a certain extent. It is reasonable to go without excess, for instance an extra pair of shoes or a particularly stylish couch, but we should not go without things that are necessary to partake in what would be contextually considered a ‘normal’ activity, for instance enjoying a somewhat comfortable sit after a day’s work. This is contrary to the notion, accepted by maximising utilitarians, that we should pare life back to its core necessities. These, of course, have been commentaries on material goods and needs; what is to be said about the other aspects of our being?

You have been invited by a friend of many years for a dinner and catch-up. Some time has passed since you have seen each other, and you always look forward to finding out what she has been up to, and filling her in on your own life. This time, she has chosen the location for your catch-up: a restaurant somewhere in the city that you haven’t heard of. The evening rolls around and you pre-emptively glance over the menu to see what kind of food you’re in for, only to find shockingly high prices. This is not the kind of place you’d normally frequent, and the cost of the meal is certainly well beyond what is required for basic sustenance. Your friend, however, seems very excited for the experience, and for the opportunity to share it with you. How should we balance these competing priorities? The perfect maximising utilitarian may argue that, yes, we should value our friendship and the shared experiences it entails, but how necessary is it that we visit a restaurant this expensive? Shouldn’t we be able to have a similar shared experience for a lower price, and put the additional cost toward a more altruistic cause, even if this means pulling the pin on your friend’s choice of restaurant at the last minute and risking damage to the relationship? Isn’t the use of the money in the hands of those who truly need it more important than the chance of a damaged friendship on your part?

The tension becomes clear here between what the maximising utilitarian view demands of the individual, and the nuanced situational considerations that fall beyond the scope of a simple cost-benefit analysis. Whilst the cost certainly may be reduced by cancelling the dinner, and the time could still be spent together, there would be a loss to the quality of the evening on account of the planning and consideration the friend has put into it. Further, if we accept that we should take the approach that delivers an acceptable experience at minimum cost, are we not obliged to spend considerable time and effort on this moral calculus to ensure that no time, effort or money is wasted? Does this mean there is only one morally acceptable outcome we can arrive at, the single point at which the balance of cost and benefit peaks? Social interaction and quality relationships constitute much of what is valuable in life (10); to do away with their quality for the sake of a benefit to others is to truly embody an agent-neutral theory and maximise total wellbeing with disregard for your own.

Maximising and the Individual

Utilitarianism, and more specifically its maximising version, offers an enticing approach to economic and public policy. The utilitarian policy maker needs to make decisions that allow maximum ‘utility’ for the maximum number of people. The commonly phrased cost-benefit analysis embodies the ethos of the approach and is widely accepted at a societal level. Utilitarianism is closely linked to the free-market economics of many Western societies, which aims to maximise production and minimise costs. This approach is not without criticism. Famously, Sen criticised the utilitarian approach, specifically its use of utility as the measure of success, arguing in favour of a more nuanced approach that viewed individuals in terms of their functionings and the capabilities they have to live a ‘good’ life (11). Nonetheless, the free market dominates much of the Western world and forms a structural basis for our societies.

It is not surprising, then, that the same ethos that has shaped society over the past centuries is applied to the individual. Maximising utilitarianism, especially the version touted by Peter Singer and many effective altruists, is certainly one version of this. One popularised and controversial idea identified with this group of consequentialists is that one should care as much or more for strangers in the world as for one’s own family (12), with the line of reasoning going something like the following: if we agree all individuals have an equal claim to life, we should not preference assisting those who just so happen to be in our vicinity, or who just so happen to share some portion of our genetic material; to do so would be arbitrary, against logical reasoning, and would violate our claim that all individuals have an equal claim to life. The agent neutrality required by this form of utilitarianism, which acts as such a strength at a societal level, runs sharply counter to intuition and preference at a personal level. We are left in a position where we must decide whether the obligation to do what is right outweighs what is intuitive and what ultimately satisfies our desire for wellbeing. As we have explored in the above examples, to meet the demandingness of maximising utilitarianism would mean compromising on many of the parts of life that make it most worth living: relationships, experiences, and even the enjoyment of a comfortable couch.

Although the question of what we are trying to maximise with utilitarianism is controversial, we can grasp that it is within the realm of ‘happiness’, ‘wellbeing’ or ‘utility’. All of these end points are necessarily individual experiences, if we assume our conscious experience is individualistic. Thus, as individuals, there must be some level of responsibility we bear to ensure we ourselves experience happiness, wellbeing, utility, or whatever we may nominally call the outcome of interest. At the maximising end of the utilitarian spectrum, we’re asked to reduce this concern for our own wellbeing to be in proportion with that of every other individual in existence, to a fraction of a billionth of our total consideration, with the caveat that we should not go beyond the point of marginal utility, where we would become the ones in need of aid.

What if each and every person discounted their own wellbeing to the point of marginal utility? It may be argued that the world this envisages would be a fairer and more just one. But wouldn’t it be more desirable for people to live cooperatively, aiming to maximise wellbeing for themselves and those around them, whom they arguably know best and are best able to help, whilst maintaining some level of altruism toward strangers and broader society, and without reducing their own wellbeing so much as to give up the pleasures and experiences of life?

It is reasonable to believe that any individual would find it difficult to consider their own interests for only a billionth fraction of their total life’s duration. So what is the alternative for those who agree with the utilitarian premise, but not with this fraction of attention to self? If we adopt a satisficing, rather than maximising, view of utilitarianism, we immediately relieve ourselves of the burden placed upon us by its maximising counterpart. We would still not find any justification for buying an expensive pair of shoes rather than donating, but perhaps if we had given a predetermined 10% of our income to charity that year, we would be justified in buying a comfortable couch to sit on and enjoy.

The satisficing approach is also more psychologically realistic for humans, who tend to rely heavily upon heuristics for decision making and behaviour. Landmark work by Tversky and Kahneman has demonstrated that people tend to fall back on set ‘rules’ or heuristics rather than strict rational reasoning for many of the decisions we make (13). Decisions about morality are no exception. It fits, therefore, that for humans with this kind of psychology, a rule-based approach in keeping with satisficing consequentialism would be a sound framework for moral action in the real world. In taking this approach, we’re not burdened with making a constant stream of cost-benefit analyses for each decision we face in our day-to-day lives. Of course, this does not mean we can simply go about life without regard for morality; we should still take seriously the obligations that Singer lays out to help those in greatest need. The question is simply how we should go about fulfilling this moral obligation.

The consequentialist community has increasingly acknowledged this in recent years. The development of initiatives within the effective altruism movement such as the Giving What We Can pledge (14) reflects a move by a traditionally maximising community toward a more satisficing approach. The Giving What We Can pledge asks signatories to commit to giving 10% of their income across their careers, an amount which, if universally taken up, would be enough to eradicate much of world poverty. This is appealing to the individual, as they can pull the proverbial child from the pond without having to expend tireless mental effort, and without doing so at great cost to their own wellbeing.

Conclusion

Utilitarianism offers one approach to ethics as a high moral theory. Within utilitarianism, many approaches and dichotomies exist, one of the more important of which is that between maximising and satisficing approaches to our obligations. The work of maximising utilitarians such as Peter Singer is harmonious with the cost-benefit, calculus-based approaches applied to problems in global health, economics, and other quantifiable issues at a societal scale. However, the individual must also account for their own wellbeing and psychology, and reducing oneself to a truly agent-neutral standpoint is not realistic. Therefore, the satisficing approach to utilitarianism is the more appropriate approach for the individual: one that allows them to take part in the experiences of life, and to help their fellow man through rule-based altruism.

References

1. Steele K, Stefánsson HO. Decision Theory. Stanford Encyclopedia of Philosophy; 2015 [updated 2015-12-16]. Available from: https://plato.stanford.edu/entries/decision-theory/.

2. Mill JS. Utilitarianism. Electric Book Company; 2000.

3. Arrhenius G, Ryberg J, Tännsjö T. The Repugnant Conclusion. Stanford Encyclopedia of Philosophy; 2006 [updated 2006-02-16]. Available from: https://plato.stanford.edu/entries/repugnant-conclusion/.

4. Portmore DW. Maximizing and Satisficing Consequentialism. PhilPapers; 2006. Available from: https://philpapers.org/browse/maximizing-and-satisficing-consequentialism.

5. Singer P. Famine, Affluence, and Morality. Philosophy and Public Affairs. 1972;1(3):229-43.

6. Centre for Effective Altruism. Combining empathy with evidence; 2021. Available from: https://www.centreforeffectivealtruism.org/.

7. 80,000 Hours. How to make a difference with your career; 2021. Available from: https://80000hours.org/.

8. Robinson LA, Hammitt JK. Benefit-Cost Analysis in Global Health. SSRN Electronic Journal. 2017.

9. de Lazari-Radek K, Singer P. The Point of View of the Universe. Oxford University Press; 2014.

10. Amati V, Meggiolaro S, Rivellini G, Zaccarin S. Social relations and life satisfaction: the role of friends. Genus. 2018;74(1).

11. Sen A. Equality of What? In: McMurrin S, editor. The Tanner Lectures on Human Values. Cambridge University Press; 1980.

12. MacFarquhar L. Extreme altruism: should you care for strangers at the expense of your family? 2015-09-22.

13. Tversky A, Kahneman D. Judgment under Uncertainty: Heuristics and Biases. Science. 1974;185(4156).

14. Giving What We Can. Pledge to give more, and give more effectively; 2021. Available from: https://www.givingwhatwecan.org/pledge/.

The weight of impact

The notion of increasing one’s reach and ability to do good is something that many strive for. Necessarily, it takes the individual away from the path that may have otherwise been forged, in search of a higher impact avenue. This can mean making quite significant diversions from what might have otherwise been an attractive path. It will also likely mean sacrificing the optimisation of other aspects of one’s career in the pursuit. 

Traditionally, there are a number of aspects to be considered when making a decision on which career path to pursue. These may include things such as compensation, the day-to-day satisfaction or fun of the work, altruistic motives, forging a sense of purpose, and the social status attached to the role. The aspects of a career choice may be more numerous, or divided up differently, but this is the division I will use for the purposes of this post. Each person places a different weight on each aspect of any one career and decides what is the best fit for them to pursue.

This is an active process, and if done correctly means that the individual does not simply default into any random career path. They reflect, make weighted conscious decisions, act accordingly, and iterate the process to ensure they’re staying on track as time goes on. As their values change over time, they too may shift focus or direction. 

The career advice within the effective altruism community argues for a much greater focus on the altruistic facet of career choice. Given the interconnected nature of the aspects of a traditional career choice framework, this tends to work relatively well. Compensation is seen as a means of personal resource acquisition, best used to establish a level of personal comfort, with any excess being used altruistically. Focus areas are often seen as meaningful, giving a strong sense of purpose and satisfaction. Many roles themselves are prestigious, and where they are not, the prestige is forgone in favour of more noble factors.

In this framework, the issue of fanaticism (the worry that an expected value calculation will be dominated by a fractional credence in a choice with incredibly high stakes) is something that needs to be taken into account. This is of particular importance when considering working on existential risk. If there is a career path that would be less enjoyable and would strain personal relationships, but has a chance of moving the needle on a catastrophic global risk, should I take it? The expected value calculation for humanity would echo a resounding yes, but of course it is not that simple.

The mashing together of the aspects of career choice into a single measure of value is not something done deliberately. In fact, there is express direction from EA organisations (80k) to give significant weight to aspects of a career outside of impact, such as personal fit. I believe collapsing to a singular focus on impact is more a fault of reasoning at a personal level. It is very difficult to maintain a balanced view of one’s trajectory, and much simpler to focus on something (somewhat) quantifiable like impact.

The collapse to a singular focus on impact can put at risk the very essence of the project itself. Becoming narrowly focussed on something not directly tangible, such as ‘impact’, can be a recipe for burnout, especially if it draws you away from things you may otherwise find intrinsically rewarding and enjoyable but that have a smaller amount of direct utility. This can be seen in a similar light to the more commonly observed problem of people seeking to maximise their income to the detriment of any sense of enjoyment or work-life balance. It is almost never sustainable, and inevitably leads down paths that would otherwise have been seen as undesirable.

Focussing on impact is different from focussing on money in significant ways. The question is how to continue the pursuit of higher impact without compromising on the other, equally important, more ‘personal’ aspects of career choice.

I believe it is important, as a first step, to consider the various aspects of a career choice, namely compensation, satisfaction, impact, purpose and social standing, in a more discrete fashion. Each aspect should be considered independently prior to making trade-offs between them. For instance, for a particular option, say being a clinically practising doctor, each of the aspects should be assessed independently; the same should then be done for another option, such as a full-time academic working in biosecurity research. One should then consider how much weight to give each aspect. Perhaps it is 50% impact, with the remaining 50% divided evenly between compensation, satisfaction, purpose and social standing; perhaps it is some other division. This division needs to be honest, and as much as possible an accurate reflection of what the individual believes to be true of themselves. Then the options can be weighed accordingly.

This approach avoids the potential for a single option to knock out all alternatives because of an extreme result on one particular aspect. The process should not be a simple addition of the aspects; it should be a weighted sum. Further, we should iterate and reflect constantly: as we move forward and gain new information, we should update our decisions and readjust accordingly. At all stages this should be as active a process as possible, and we should recalculate the weighted sums so as not to slip into a narrow focus. I believe this approach would allow one to follow a more personally satisfying, and overall more impactful, career path.
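A rough sketch of the weighted-sum comparison described above is given below. The weights, options and scores are purely illustrative placeholders for an individual’s own honest assessments, not recommendations.

```python
# Illustrative weighted-sum career comparison. Scores are hypothetical ratings
# out of 10, assigned independently for each aspect before any trade-offs.

weights = {
    "impact": 0.5,
    "compensation": 0.125,
    "satisfaction": 0.125,
    "purpose": 0.125,
    "social_standing": 0.125,
}

options = {
    "clinically practising doctor": {
        "impact": 5, "compensation": 8, "satisfaction": 7, "purpose": 7, "social_standing": 8,
    },
    "academic in biosecurity research": {
        "impact": 8, "compensation": 5, "satisfaction": 6, "purpose": 8, "social_standing": 6,
    },
}

def weighted_score(scores, weights):
    """A weighted sum of aspect scores, rather than a simple unweighted addition."""
    return sum(weights[aspect] * score for aspect, score in scores.items())

# Re-run the comparison as weights or scores change over time.
for option, scores in options.items():
    print(option, round(weighted_score(scores, weights), 2))
```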

Maximising temperance

This is more of a reflective piece. I have been thinking a lot about temperance recently, and whilst I don’t necessarily agree with all that virtue ethics has to offer, temperance in particular has certainly been of benefit to me when trying to think about maximising outputs.


At a high level I tend toward the consequentialist side of the moral landscape. Whilst I do have some level of moral uncertainty about exactly what we ought to be doing with our time and efforts, I certainly agree that, all else equal, if we’re able to improve the lives of others then that is a good thing. I also agree that if we’re able to improve the lives of two people instead of one for an equivalent unit of effort, then we ought to do that.

This is fine as a high-level approach to one’s morality; however, it tends to scale poorly to the individual level. If I am able to produce 1 unit of output for 1 hour of work, which in the grand scheme of things has some net positive effect on the world, then I ought to produce as many units as I can. It is obvious how this logic almost immediately fails when we try to apply it. We are not machines; we need to sleep, eat, exercise and have meaningful social interactions. Producing 24 units of output per day is impossible. But maybe there is some lesser value that is possible whilst maintaining all of the necessary conditions for staying alive and productive as a person.

What I have found when attempting to pursue this mindset is that any time spent outside of producing work or fulfilling the necessary conditions of life comes to be seen as a missed opportunity for producing units of output. Again, quite obviously, one could imagine how this self-berating approach is not optimal for one’s overall wellbeing or happiness. It’s probably not ideal for one’s level of output either. Most people have probably experienced the diminishing returns from pushing themselves to their limits with study, work, exercise or almost any output: a great proportion of the benefit comes in the early stages of the activity, and the additional effort expended beyond that provides little additional output.

So taking a direct approach in attempting to maximise units of output has not successfully allowed for the maximisation of output. I have recently attempted to make a shift in mindset from maximising units of output directly, to maximising temperance. Temperance is one of Aristotle’s core virtues. In virtue ethics it exists as the mean of the pleasure/pain continuum, between self-indulgence and insensibility. Attempting to optimise for temperance has been a qualitative shift which struck me immediately as an intuitively useful way of trying to avoid the ‘burnout’ associated with directly attempting to maximise units of output, whilst improving the absolute level of positive output from one’s work. 

Combining this approach with a more satisficing (as opposed to maximising) approach to my individual work, I have certainly felt as though the burnout-like effects associated with directly attempting to maximise outputs have been greatly subdued, and that I’ve probably been more productive in what I have been able to achieve. This has applied outside of ‘work’-related activities as well. I recently took up running and was injured from increasing my training volume too quickly. Once I had recovered, taking a temperate approach and slowly increasing distance allowed me to surpass my previous best times, with far lower levels of perceived exertion.

Autonomy and Vaccination

I wrote this as an attempt to formalise my position on the issue of vaccination, which has been highly topical recently. It is by no means perfect, but I feel as though I learnt a lot from attempting to process an argument into the form of a syllogism and make it as coherent as possible. It is a lot more difficult than I had anticipated.


Abstract: Vaccination is a contentious issue, and the move toward mandatory vaccination policies during the COVID-19 pandemic has reasonably raised questions about their violation of the principle of respect for autonomy. In this article, I seek to formalise the argument that mandatory vaccination is at odds with respect for autonomy, then assess that argument from differing conceptions of autonomy. I conclude that mandatory vaccination does violate the principle of respect for autonomy, but that this is justified in the case of a pandemic with an available safe and effective vaccine.

Vaccination is an issue of great importance, and of course most topical in the midst of the COVID-19 pandemic. Mandatory vaccination seems as though it may become commonplace either explicitly, as is being seen in certain workplace policies, or implicitly, as vaccine passports and similar incentives filter into use. Vaccination holds a unique position amongst medical interventions. In most cases, it is first and foremost a preventative measure, meaning its benefits are often concealed from observation. A successful vaccination causes no event to take place, which psychologically makes it difficult to appreciate without the help of statistics proving its worth. Vaccine biology can be complex, varies widely from vaccine to vaccine, and can evade understanding even amongst medical professionals. As a result, individuals who receive vaccines are often doing so without a full understanding of the process being undertaken. Further, vaccines are often administered at a stage of life where individuals are unable to conceptualise the process or consent to the procedure, namely in childhood, which introduces an ethical dimension in that the intervention often needs to be carried out in the ‘best interests’ of the recipient; this differs considerably in the case of COVID-19. Perhaps the most entangling aspect of vaccines is that they relate to infectious disease. As a result, a mismatch is established in which an individual receiving a vaccine must take on all the associated risk and responsibility, for the benefit of themselves as well as the community around them.

Unsurprisingly, vaccination is a hotly contested issue. Vaccinations for COVID-19 that are empirically considered safe and effective have been developed, manufactured and implemented in public health campaigns in record time. For sceptics, however, vaccination, and especially mandatory vaccination, can be seen as an impingement on autonomy and an overstepping of government or state into the realm of individual liberty and choice. If an individual should choose not to undergo an intervention, should this not be their choice?

To unpack this further, let us formalise the common position held by individuals against mandatory vaccination on the basis of interference with autonomy, and then examine each premise and the conclusion:

Premise 1: Respect for autonomy is a fundamental principle of medical ethics

Premise 2: Autonomy implies choice regarding undergoing or refusing medical treatments or interventions, including vaccination

Conclusion: Mandatory vaccination does not respect autonomy, and is thus unethical

Premise 1

Beauchamp and Childress place respect for autonomy front and centre amongst their 'four principles' approach (1). They reason that, for the purposes of ethical debate in biomedical ethics, the four principles can be grounded in common morality and act as sufficient philosophical tools to resolve disputes around contentious issues.

The inclusion of respect for autonomy as a central principle of biomedical ethics emerged in response to changing perceptions of the paternalistic style in which healthcare was practiced. An extreme instance that sparked debate over the importance of respecting autonomy was Madrigal v Quilligan, in which nearly 200 women of non-English-speaking background were coerced into surgical sterilisation by medical staff (2). A move toward patient-centred models of care (3) has meant that, in a practical sense, respect for individual autonomy has only become more important.

The philosophical debate about the intrinsic value of autonomy is complex and remains contested (4). Resolution of that debate is beyond the scope of this article, but it appears we have sufficient grounds to accept the premise that respect for autonomy should be a fundamental principle of biomedical ethics.

Premise 2

We can accept that autonomy should be respected, but a working definition of autonomy, and what it subsequently implies for the individual, is itself a difficult issue. Beauchamp and Childress champion a classical 'decisional' account of autonomy. Libertarian accounts of autonomy are founded in Mill's account of liberty and focus on minimal state intervention as a means of arriving at autonomy (5). Each of these conceptions offers a differing perspective on autonomy. Let us examine each and its implications for mandatory vaccination.

Decisional conceptions of autonomy state the following conditions under which a decision should be considered autonomous: it must be intentional; the individual must be able to understand the decision; and the decision must be voluntary and without the influence of external forces (6).

Of particular interest in the case of vaccination is the final condition. It appears that, under the decisional conception of autonomy, mandatory vaccination would not fulfil the criteria. In a discussion of when moral obligations may be overridden, in Chapter 1 of Principles of Biomedical Ethics, Beauchamp and Childress state:

"Compelling justifications are sometimes available. For example, in circumstances of a severe swine flu pandemic, the forced confinement of persons through isolation and quarantine orders might be justified." (1)

We must consider whether mandatory vaccination would be a justified circumstance in which to override this account of autonomy. Within the four principles framework we turn to the accounts of beneficence, non-maleficence and justice. In respect to preventing the spread of an infectious disease with a safe and efficacious vaccination, beneficence is upheld on appeals to public health and its implications for the individual's wellbeing, namely that prevention of a harmful infectious disease is in the best interest of the patient and the community. Non-maleficence, or doing no harm, would suggest that a vaccination that can prevent a potentially serious condition for little risk should be administered. Accounts of justice in relation to vaccination suggest the principle is best upheld where those who have access to vaccination, and are medically able to be vaccinated, do so, in order to protect those who do not have access, or whose only means of protection against infectious disease is herd immunity (such as children, or immunosuppressed individuals unable to receive certain vaccinations). Whilst on a decisional account mandatory vaccination would not be considered an autonomous decision, it would nevertheless be justified by its appeal to justice, beneficence and non-maleficence.

Mill's On Liberty is a classic and foundational account of libertarianism. It seeks to outline the nature and limits of the power and influence the state can exercise over the individual. Libertarian accounts of autonomy have been utilised in the justification of relatively controversial practices such as organ sales (7), and physical and cognitive human enhancement (8). Although this account of autonomy is liberal in respect to cases which impact only upon the individual decision maker, the Harm Principle acts as an important caveat:

“The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.” (5)

In consideration of a pandemic, where an efficacious and safe vaccination is available, we can prevent, or at least reduce the risk of, harm to others in the spread of an infectious disease by being vaccinated. Thus, on a libertarian account, whilst mandatory vaccination would not be in keeping with the conception of autonomy, autonomy would be overridden by the Harm Principle.

On both the traditional account of autonomy espoused by medical ethics and the libertarian account, autonomy does imply a choice regarding the refusal of medical treatment. However, the mandatory use of an effective and safe vaccination in the case of a pandemic constitutes a justified exception on both accounts. Premise 2 can therefore not be upheld unconditionally.

Conclusion

We have reasoned that respect for autonomy should be considered a fundamental principle of medical ethics, at least in a practical sense, and that whilst policies that either implicitly or explicitly mandate vaccination do not respect individual autonomy, mandatory vaccination in a pandemic represents a specific and unusual case that justifies overriding this principle. On this line of reasoning, we cannot conclude that mandatory vaccination is unethical on the grounds that it violates respect for autonomy.

Conclusion

Mandatory vaccination in the context of the current COVID-19 pandemic raises reasonable questions as to its respect for individual autonomy. On both classical and libertarian accounts of autonomy it holds that mandatory vaccine policies do not respect individual autonomy. However, pandemic infectious disease represents a special case in which the public good and community health must take priority over respect for autonomy, and mandatory vaccination can thus be ethically justified.

1. Beauchamp TL, Childress JF. Principles of biomedical ethics. 7th ed. New York: Oxford University Press; 2013.

2. Martinez R. NWSA Journal. 2009;21(3):210-6.

3. Greene SM, Tuzzio L, Cherkin D. A framework for making patient-centered care front and center. The Permanente Journal. 2012;16(3).

4. Varelius J. The value of autonomy in medical ethics. Medicine, Health Care, and Philosophy. 2006;9(3).

5. Mill JS. On Liberty. Yale University Press; 1982.

6. Mackenzie C. Autonomy. Abingdon: Routledge; 2014.

7. Kyriazi H. The ethics of organ selling: a libertarian perspective. Issues in Medical Ethics. 2001;9(2).

8. Corbellini G, Sirgiovanni E. Against paternalistic views on neuroenhancement: a libertarian evolutionary account. Medicina nei Secoli. 2015;27(3).

We are not agent neutral

One feature of utilitarianism is its presupposition that all human life is of equal value. This is not a controversial claim for many people, and it is reflected in our donating to or working for the benefit of those we are not personally affiliated with, or whom we may never meet. Agent neutrality is the underlying force at play here: each person is a moral agent who is valuable in and of themselves. This is opposed to agent-relative theories, which in the context of altruism would give particular weight to an individual if they stood in a particular relation to the decision maker.

Agent neutrality is a strength that allows consequentialist theories to function at a high level, for example in making population-level policy decisions about resource allocation. Its implication is that the maximum amount of good for the maximum number of people is what we should be optimising for, and that who those people are does not matter. The flip side of this maxim is that it also does not matter who it is that drives the changes that maximise the good for the maximum number of people. It does not matter whether you take a job that allows you to drive an extra 100 units of good into the world, or whether someone else does; all that matters is that the extra 100 units are 'created'. This is the basis of the replaceability argument, which states that the amount of impact you have is not equal to the amount of good you 'create', but to the difference between the amount of good you would create and the amount the next best person would create if they filled the position instead.

True impact (TI) = Good produced by you (GY) – Good produced by the next best person (GNB)

Psychologically, it is obvious that agent neutrality is difficult when considering how to divide up our altruistic resources. It is a common human experience that we have a greater affinity for those near and dear to us, and that we wish to see them be happy and thrive. Further, we have somewhat of an intuition to give charitably to those who are in need and are immediately in our field of awareness. This is an instance of the availability heuristic: we are more likely to feel a pull to give to the homeless man we walk past on the way to buy a coffee than to the child dying of malaria whom we are blissfully unaware of.

I believe it is equally difficult, psychologically, to properly frame your impact in terms of TI rather than GY. Whilst logically one can appreciate the calculation, it certainly doesn't feel that way when you're the person spending your career or life's work on an issue. Consider, for instance, a medical career as a doctor. This analysis estimates a +2600 QALY impact for a doctor working in the UK before accounting for replaceability (equal to the GY), reduced to about +760 QALYs once accounting for replaceability (equal to the TI). The TI should be what we consider for a doctor viewed simply as a moral agent, but what about for you as a doctor? It is difficult to reconcile the fact that although you're providing some level of good in the world, it is being taxed by your replaceability.
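To make the arithmetic concrete, here is a minimal Python sketch of the replaceability adjustment, using the illustrative QALY figures quoted above; the GNB value is simply back-calculated from them rather than independently estimated.

```python
# Minimal sketch of the replaceability calculation (TI = GY - GNB).
# Figures are the illustrative ones quoted above; GNB is back-calculated from them.
def true_impact(good_by_you: float, good_by_next_best: float) -> float:
    """The good you produce minus what the next best person would have produced."""
    return good_by_you - good_by_next_best

gy = 2600          # QALYs for a UK doctor before accounting for replaceability
ti = 760           # QALYs after accounting for replaceability (per the analysis cited)
gnb = gy - ti      # implied QALYs the next best applicant would have produced
print(true_impact(gy, gnb))   # -> 760
```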

Of course, just because this is a tough pill to swallow does not discount the approach as a whole. I believe that accounting for replaceability when estimating impact is nuanced and appropriate, but I think it may be overemphasised to some degree. Perhaps prospectively one should consider the TI when making career decisions, but choosing a direction with a low TI and a high GY is also a reasonable option. Whilst it is true that the GNB may also be high, you'd also be allowing that next best person to pursue an alternate path. Perhaps you would be able to find an alternate path with a higher TI, but perhaps the person you are displacing would be equally likely to find that path? It is often countered here that if you're a person thinking in terms of marginal impact, you would be more likely than someone from the background population to pursue a route with a higher TI. I agree with the ethos of this point, but think its value may again be overweighted. Paths with a high TI are neglected and can therefore be difficult to make progress in. Simply believing you would be a candidate to make that progress does little to actually make the impact.

One approach I have found useful when trying to consider these issues is to recall one's position as one of several billion people in the world, within one short time frame. Whilst doing the most good is a noble and highly utilitarian aim, framed from a universal point of view the likely difference between doing the most good (i.e. maximising your TI) and doing good (i.e. aiming for a high GY) will be small. There is of course a greater chance of having a universally large impact, but ultimately some is better than none.

Agent neutrality is a powerful aspect of consequentialist moral theory. It is not so easy, however, for an individual trying to make individual decisions to fully comprehend and psychologically frame its implications. Whilst one may be able to maximise their impact by considering replaceability, this entirely discounts the fact that from the individual's point of view, it is their impact that they're ultimately concerned with and have the most control over. The fact of agent neutrality makes utilitarianism difficult to accept in some ways, but it is an important strength we need to balance with our own individual approach to the world.

Is becoming a neurosurgeon immoral?

If something good can be done, and it isn’t hurting anyone, it’s a good thing to do more of it. If we can save more lives for the same cost, not only is this a good thing, some may say it’s morally obligatory. So what do we have to say in the case of those of us who go into narrowly focussed careers that require years of training and countless hours? Couldn’t all that effort, all those resources be put somewhere else more effectively? Are these circumstances morally analogous to choosing to donate our money to an ineffective charity? 

Becoming a neurosurgeon is a tough slog. In the US, the training timeline for a neurosurgeon goes something like this at a minimum: 3 years of undergrad, 4 years of medical school, and a 6-year residency, +/- fellowship. That's 13 years of work from the end of high school, which for a perfect applicant would see them practicing at the age of 31. The average age of a commencing medical student is actually around 24, meaning the 31-year-old neurosurgeon is probably a rarity, with the average closer to 34-35. Additionally, individuals undertaking this route are unlikely to be sticking to a 40-hour work week; hours have recently been capped at 80 per week, but may exceed that at times. This means that over the 6 years of residency, a neurosurgery resident does the equivalent of 12 years of 40-hour work weeks.
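As a rough back-of-the-envelope check on that timeline, here is a small sketch; the figures are only those quoted above.

```python
# Back-of-the-envelope version of the US neurosurgery timeline described above.
undergrad, med_school, residency = 3, 4, 6
training_years = undergrad + med_school + residency          # 13 years after high school
earliest_practice_age = 18 + training_years                  # ~31 for a 'perfect' applicant

# Residency at ~80 hours/week is roughly double a standard 40-hour week,
# so 6 years of residency is about 12 years of 40-hour work weeks.
equivalent_years = residency * 80 / 40
print(training_years, earliest_practice_age, equivalent_years)   # 13 31 12.0
```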

This is obviously an incredibly tough route, that requires a lot of stamina and commitment. Neurosurgeons develop the epitome of specialist knowledge and procedural skills to be able to handle complex clinical scenarios, adapting their approach for each individual patient and case. I don’t think we would want it any other way, after all, it is brain surgery.

One does have to consider the moral implications when viewed through the traditional effective altruist lens. If one were to give a large quantity of money to a charity that distributed it inefficiently, resulting in only a small absolute improvement for those it was directed toward, when it could have been given to a tried and tested organisation that would maximise your bang for buck, we would likely see an obvious space for improvement in the individual's actions. How, then, are we to think of the individual who becomes a neurosurgeon, no doubt making a large personal sacrifice and showing commitment and service to others, to assist a relatively small number of people, when with the same level of commitment they may have been able to make an ample contribution to an area such as reducing catastrophic biological threats?

The analogy between charity and surgeon is not perfect. One could argue that it would be reasonable to redirect all ineffective donations to more effective charities, but it would be more difficult to argue that all neurosurgeons should go back and choose a different career to commit themselves to, leaving us without surgeons to take out brain tumours and evacuate bleeds (assuming we got the message to, and convinced, all would-be neurosurgeons and no one filled their place). This is an important distinction: neurosurgeons are a fundamental part of the society we wish to live in, but ineffective charities are not. In order to uphold this, we need individuals willing to show the level of commitment and personal sacrifice necessary to be trained accordingly.

The contrary argument could be made if we don't assume that neurosurgeons are a necessary part of society. Perhaps it is indulgent of modern-day society to allocate such resources to the training, development and progress of a field that seems to move the needle relatively little in the grand scheme, when elsewhere in the world hundreds of thousands die from malaria, a preventable and treatable disease. For practical purposes this view may be a little contrarian, but it needs to be considered morally. Within the constraints of our society as it stands, it seems as though we can write this argument off as unlikely to be upheld practically (although I think it is an important point from a more philosophical point of view).

Should an individual on the brink of making a career decision go down the long, winding and sleepless road toward neurosurgery, or should they stop and consider the implications? From this perspective, it seems an easy argument to make from the effective altruism point of view that working on something with a greater impact is the moral way to go. Neurosurgery is highly competitive, and one individual bowing out of the race would quickly be replaced. An additional person working elsewhere could have a much greater impact.

For the individual who cannot see themselves anywhere but operating in the squishy interior of people's skulls, and who has the fortitude and commitment, there doesn't seem to be anything morally abhorrent about following their passion, at least in a society that deems neurosurgery a core aspect of its fabric. It is far preferable to doing nothing at all, or even to a career that requires little skill development or commitment, as other individuals who do not end up following the path will likely be diverted elsewhere. It goes without saying that this framework could be applied to a wide variety of careers, including a wide variety of medical specialties and others that require vast amounts of personal investment.

One should think deeply about what they wish to achieve through such a career, however. If the goal is making an impact on the lives of those around you, as mentioned above there is a lot to be said for choosing an alternate route. This is especially the case if you're an individual willing to show high levels of grit and conscientiousness, as you could have the potential to do a lot of good in this world, and into the future.

How long would I live as an oyster, and other questions for utilitarianism

Utilitarianism is where we find the roots of rationalistic altruism. At a glance it's enticingly intuitive: we seek to maximise the most good for the most people. As utilitarians, we give no regard to an individual's age, location, nationality, race or any other arbitrary categorisation; the aim is to optimise this outcome for each and every individual. Simple and fair, right? In this model, it follows that we may be able to maximise the good for the world either by raising the amount of 'good' or 'utility' any one individual experiences, or by increasing the amount of time they experience it for, across their lifespan. As we take an agent neutral perspective, we could also increase this amount of good by increasing the number of beings experiencing a positive life.

This is the calculation in its simplest sense; the conception of what the good actually looks like is a much debated topic with nuance and controversies of its own. For the sake of this discussion, let us adopt an arbitrary scale running from -10 units (U), a life of pure suffering with no utility whatsoever that would not be worth living for any individual, to +10U, a life of pure bliss, pleasure and fulfilment, and the ideal to which we aspire. A neutral life, of score 0U, would be one that is neither good nor bad: pleasure and pain are in balance, or do not exist at all. A life at score 0U would be neither worthwhile nor worthless.

Following on from our moral calculus and our obligation to maximise the good in the world, we have a few options going forward. A first step may be to increase the quality of the lives of people who currently exist, so as to maximise the overall good in the world. By increasing every individual's quality of life by a mean of 1 unit, we could render an additional 7.64 billion units; that is no number to scoff at. This is a relatively non-controversial point that most who feel the pull of consequentialist conclusions would agree with, and it is what work in global development seeks to address.

What about for the future then? How should we aim to maximise the units of good? 

Let us for the sake of this argument assume that we agree that all sentient life matters to the extent to which it is sentient and can perceive the good. In this model perhaps a fish could experience around 20% of the pleasures or pains of a human, and thus be capable of adding or subtracting 2U to the overall total we're seeking to maximise (the proportionality here has no practical grounding, and is simply meant to exemplify the point). From here I will refer to any sentient life capable of experiencing some level of good as an agent.

It is also the case that the total number of lives lived on our planet is necessarily limited by our available natural resources. The United Nations has predicted that the earth's population will likely peak around 2100 at around 11.2 billion people. This is already beyond what is sustainable given the ecological constraints of our planet, so we cannot simply conclude that the best way to maximise the good is to increase the number of people living worthwhile (i.e. >0U) lives.

What if we could reduce the necessary resources each individual agent requires to sustain a life at a level >0U? 

It is reasonable to agree that the correlation between resources and perceived happiness (or some other crude measure of wellbeing) is non-linear, and that increasing amounts of resources are required to achieve a +1U increase in wellbeing as we move closer to the ideal state of +10U. For example, imagine an individual living in poverty, without shelter, access to clean water, or a reliable supply of nutrition. Providing a small $10 weekly payment for food, water and shelter would do a lot more to improve their wellbeing than it would for a banking executive in a major city. Again, this is a non-controversial point and forms the basis for many utilitarian conclusions.

So we get a vastly better return on investment at lower levels of wellbeing. It would then follow that if we were to aim at maximising the total units experienced by agents, a universe in which there are many agents experiencing very low, albeit positive, levels of wellbeing (think 0.1U) would at some point be more valuable than the one we currently live in. Say, a world where we fill our oceans with as many oysters as they can hold, all of which sit and filter sea water for their lifespan, achieving a consistent, marginally positive level of wellbeing.

Derek Parfit first arrived at this result, known as The Repugnant Conclusion, which is stated as follows:

“For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living” (Parfit 1984)

Now, take a human life of 80 years with a mean wellbeing of +5U. We could convert our 80 years of human life, or +400 U-years, to an equivalent +400 U-years of oyster life. If the oysters experience an average of 0.1U of wellbeing throughout their life, we would need 4000 years as an oyster to arrive at the same total.
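The conversion is simple enough to write down; here is a minimal sketch using the arbitrary U figures from the thought experiment above.

```python
# U-year conversion from the thought experiment above (all figures arbitrary).
def years_needed(total_u_years: float, wellbeing_per_year: float) -> float:
    """Years at a given wellbeing level required to match a target total of U-years."""
    return total_u_years / wellbeing_per_year

human_u_years = 80 * 5                   # an 80-year life at +5U -> +400 U-years
print(years_needed(human_u_years, 0.1))  # oyster life at +0.1U -> 4000.0 years
```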

From an agent neutral perspective this appears to be sound reasoning. Would we, however, as agents who experience a broader range of wellbeing than the oysters, be willing to accept this fate and convert to oyster lives? Would we be willing to give up a life in which we experience enduring friendships, a favourite meal or some other highly valuable things (however we define them), for a life of mildly enjoyable water filtering?

From any individual agent's perspective the life of greater range appears more appealing, and of course this makes perfect sense. As a human agent, living a 4000+ year life as an oyster goes pretty strongly against my intuitions. From my point of view, if I were a competent judge (Mill argues that only an individual who has experienced both higher and lower pleasures is capable of judging the value of the two experiences against each other and ranking them in terms of preference), then I would probably choose the human life, for the range of experiences it offers and the peaks in wellbeing that can be achieved.

So there exists a tension here between the agent neutral and agent centric perspectives. Now, what if we consider the opposite case, where we could live a much shorter life but with a much greater range of wellbeing? For instance, perhaps technology becomes available whereby one is able to be uploaded to a simulated experience machine in which wellbeing up to +100U is achievable. In order to match our human life of +400U, we would only need to spend 4 years in the machine. Is the trade-off one we would be willing to accept?

This seems more difficult to intuit, as we have never been capable of experiencing such levels of wellbeing. The agent neutral moral calculus holds that we should at least consider these two circumstances equal, and more than likely we should prefer the experience machine scenario, as there's a greater density of wellbeing achievable and a greater potential to increase the total number of U in the universe. This pretty quickly reduces to an argument for a world in which the experience machines run at full capacity for as long as possible, and the existence and wellbeing of humans has relatively little to contribute. Perhaps this is even more repugnant than Parfit's initial conclusion.

Factoring impact into the career decision equation in medicine

Here is a piece I wrote for Effective Altruism Medicine. I had spent a lot of time thinking about how one can increase their impact whilst working in clinical medicine, as doctors typically pour copious amounts of time, money and effort into their careers. For most people entering medicine, wanting to do good in the world is at least part of the reason for entering the field, but relatively little time is spent trying to figure out how one should actually go about this.


Most doctors go into medicine with the broad aspiration to do good with their careers. This desire may coexist with various other motivations, such as intellectual engagement, pursuing scientific interests, or having a stable job and income (both of which can be instrumentally valuable for other goals), but it is nevertheless an important objective for many. We argue here that for doctors for whom impact is an important consideration, there are concrete steps that can be taken to have an impact that far outweighs what is achieved in a 'typical' clinical medicine career.

Medical doctors are commonly viewed as exemplary do-gooders in society, working for the benefit of patients and bettering the world one person at a time. Running the numbers, it is evident that this may be somewhat overstated. Quantifiably impactful careers tend to be scalable and to improve the lives of many, which is not the reality of being a doctor seeing individual patients in a clinic. Further, the medical career path does not tend to efficiently select the most talented individuals into the most impactful positions. More often, highly sought-after, technically driven areas attract those at the top of the field, whilst careers with the potential for systemic or population-based change, like those in public health, tend to be undersubscribed.

If one weighs impact heavily in their career, they may have good reason to investigate this question further, and potentially change the trajectory of their career on its basis. How one chooses to go about this is highly individualised and will depend heavily on factors such as career stage, and the overall weighting of impact for the individual.

Current Estimates of a Doctor’s Impact

The number of lives saved by a doctor is difficult to quantify. To do so, Dr. Gregory Lewis and Benjamin Todd of 80,000 Hours have performed an analysis based on global and UK data, arriving at a particularly noteworthy result. The resulting statistics show that a doctor working within a developed country with an established healthcare system will, on the margin, save approximately 600 Quality-Adjusted Life Years (QALYs), or around 20 lives, over the course of their career. This is, of course, a remarkable amount of good to have done. However, many doctors may not realise that a greater positive impact may be achievable, whether through donations or focussed work. For comparison, it has been estimated that donations to effective charities can save one human life for approximately $3,000-5,000 USD. If a doctor donates $5,000 USD of their income per year to an effective charity, then after ~20 years of working they would probably have already saved more lives with their donations than they'd be expected to save throughout their entire medical career. A medical student or young doctor who has the option to enter other, more lucrative fields could stand to have an even greater impact by choosing a higher-earning profession.
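A quick sketch of that comparison, using only the figures quoted above (the cost-per-life range should be treated as illustrative rather than authoritative):

```python
# Donations vs direct clinical impact, using the figures quoted above.
career_lives = 20                      # lives saved over a clinical career (marginal estimate)
donation_per_year = 5_000              # USD donated to an effective charity each year
years_donating = 20

total_donated = donation_per_year * years_donating     # 100,000 USD
lives_at_5000_per_life = total_donated / 5_000         # 20.0 (upper end of the cost range)
lives_at_3000_per_life = total_donated / 3_000         # ~33.3 (lower end of the cost range)
print(career_lives, lives_at_5000_per_life, round(lives_at_3000_per_life, 1))
```

At the conservative end of the cost range the donations roughly match the clinical career estimate; at the lower end they exceed it.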

Lewis and Todd derived their conclusions by examining the improvements in global life expectancy over the past 100 years, then factoring in the proportion likely attributable to medicine (about 16-18% of the reduction in total mortality and morbidity between 1900 and 2000). Considering the additional reduction in disability-adjusted life years (DALYs) from medicine (to account for improved quality of life), the result is approximately 90 lives per doctor. This isn't where the calculations end: after accounting for diminishing marginal returns (the fact that one less doctor in a health system likely wouldn't reduce the overall good done by it) and replaceability (the fact that if you weren't to be a doctor, it's very likely that someone else would be), the figure comes to around 20-25 lives per doctor. For further details on the methodology, the blog post itself is well worth the read.
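For illustration only, the adjustment chain can be sketched as below; the starting figure is from the text, but the combined discount is an assumed factor chosen simply to land near the quoted 20-25 lives, not Lewis and Todd's actual methodology.

```python
# Illustrative reconstruction of the adjustments described above (not the authors' actual numbers).
lives_attributable = 90        # lives per doctor before adjustments, as quoted
combined_discount = 0.25       # assumed combined factor for diminishing returns and replaceability
marginal_lives = lives_attributable * combined_discount
print(marginal_lives)          # 22.5, within the quoted 20-25 range
```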

The analysis goes a long way toward quantifying what may at first seem an immeasurable factor (fans of Douglas Hubbard's How to Measure Anything would find the analysis particularly satisfying). Whether or not we accept the exact figures concluded by Dr. Lewis, it serves as a useful measure for those interested in the intellectual question of a doctor's impact, or for those thinking of applying to medical school who weigh potential impact heavily in their basket of important career outcomes. However, its applicability to physicians already working within the medical profession may be less clear.

As in most fields, careers within medicine vary greatly in the amount of impact they're able to achieve. Further, for each of these career paths, an individual travelling along it will find that at each stage, varying levels of seniority and decision-making ability mean that their ability to influence patient care, local health system policy, or research agendas changes in tandem. So how should individuals midway along such a path seek to maximise their impact?

Model of impact across a medical career 

The trajectory of a typical medical career can broadly be split into the stages of resident/registrar and attending/consultant, or for the sake of argument, 'early' and 'late'. Of course, in reality, a number of stages exist in between, but we will take this structure for simplicity. As previously mentioned, the above calculations take into account all doctors within the health systems identified and average the returns across this total. This is a satisfactory statistic for an outside view of a medical career as an entity, but it struggles to capture the variation within a medical career.

Most doctors would certainly agree that the junior years are quite monotonous, filled with typing, signing, and making repetitive phone calls. Most tasks carried out on a day-to-day basis are under the instruction of those more senior, and the absolute level of autonomy is relatively limited. An early career doctor, for the most part, will not be making prognosis-altering management decisions or performing complex and high-stakes procedures. It is quite clear that in junior positions, doctors are an order of magnitude more replaceable than their more senior counterparts.

This, of course, is not unique to the field of medicine. In academia, individuals in their 50s and 60s publish almost twice as much as those in their 30s. If we take the average age of a newly qualified attending to be 35, there is a large stretch of time in which replaceability remains high in one's early career. This suggests that for individual doctors, the majority of their impact will occur once they reach a point of seniority where their relative replaceability is reduced; the impact in their career is significantly back-weighted. Likewise, salaries tend to grow significantly once a level of seniority is reached, rather than following a linear trajectory. Further, with seniority comes the potential to be involved in influential ancillary positions, such as directorships of research or policy-making roles.

What this suggests is that medical careers, with their hierarchical nature, tend to have an inflexion point beyond which one's impact rises quite steeply over a short period of time (perhaps more so than in other fields). Beyond this point of inflexion, it seems as though one's direct clinical impact would level out considerably, albeit at a higher level than in previous stages of training (as medicine is not typically a career that scales well). What we end up with, following this model, is a sigmoid distribution of impact across a career, with the point of inflexion approximating the transition from trainee to attending physician.

Whilst this model describes the shape of the impact curve within a typical clinical medical career, the range of the curve may vary considerably from specialty to specialty, or with the location one practices in. Although the data is sparse, as a means of rough comparison, let's compare an American oncologist with an ophthalmologist working in the developing world.

There are approximately 1.7 million new cancer diagnoses in the US per year, and approximately 13,000 oncologists. Assuming that each newly diagnosed patient sees an oncologist, an average oncologist will therefore treat about 130 new patients per year. The 5-year survival rate for cancer (all types) increased from 50.3% to 67% between 1970-77 and 2007-2013, an improvement of about 17 percentage points. 10-year survival rates for all cancers combined are approximately 63%, suggesting that about 82% of those who survive 5 years will survive another 5, and would likely approach the baseline population rate as time went on (let us assume that they will live out the rest of their lives in keeping with the average life expectancy). The average age of a cancer diagnosis is 66, and the average life expectancy in the US is around 79 years. If we give all of the 'benefit' from these figures to the oncologists, we can make the cursory calculation that 14% of individuals diagnosed with cancer will live out the remainder of their lives thanks to their oncologists; that is about 238,000 people per year, or about 18.3 people per oncologist per year. Each of these individuals lives for an additional 13 years and, generously assuming full quality of life, gains 13 QALYs. Given our sigmoid model of impact in a medical career, with the majority of impact starting at 35 and a retirement age of 65 for oncologists (therefore about 30 years of impact), they have approximately 30 x 18.3 x 13 QALYs of impact, or approximately 7,137 QALYs. This figure does not account for the marginal benefit of an additional oncologist working within the field, just the average for an oncologist currently working within it.
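The arithmetic above can be collapsed into a short sketch; every figure is the illustrative one quoted in the paragraph, not an authoritative statistic.

```python
# Sketch of the oncologist estimate above, using the quoted figures.
new_diagnoses_per_year = 1_700_000
oncologists = 13_000
survival_gain = 0.17           # improvement in 5-year survival attributed to treatment
long_term_fraction = 0.82      # assumed fraction of 5-year survivors who survive long term
years_gained = 79 - 66         # US life expectancy minus average age at diagnosis
career_years = 30              # assumed high-impact years (roughly age 35 to 65)

patients_per_oncologist = new_diagnoses_per_year / oncologists                             # ~130
lives_per_oncologist_year = patients_per_oncologist * survival_gain * long_term_fraction   # ~18.3
career_qalys = lives_per_oncologist_year * years_gained * career_years                     # ~7,100
print(round(patients_per_oncologist), round(lives_per_oncologist_year, 1), round(career_qalys))
```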

An ophthalmologist working as a cataract surgeon in a developing country such as India, within a well-resourced institution and system, can impressively perform approximately 15 cataract operations per hour. In practice, a cataract surgeon may perform approximately 20 operations a day. These operations, whilst not necessarily life-saving, have been estimated to save anywhere between 0.17 and 0.6 DALYs per patient per year. 47.5% of individuals aged over 40 were found to have cataracts in one cross-sectional Indian study, a proportion that increases steadily with age. With an average life expectancy of approximately 70 years, someone receiving an operation at age 50 stands to avoid anywhere between 3.4 and 12 DALYs. Assuming a surgeon works at a rate of 20 patients per day, 5 days a week, 48 weeks per year, for a period of only 10 years within an efficient system, and taking the conservative estimate of DALYs, it seems they could help patients avoid 163,200 DALYs.
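The same can be done for the cataract surgeon; again, the inputs are simply the assumptions stated above.

```python
# Sketch of the cataract-surgeon estimate above, using the stated assumptions.
ops_per_day = 20
days_per_week = 5
weeks_per_year = 48
years = 10
dalys_averted_per_patient = 3.4    # conservative end of the 3.4-12 DALY range quoted

patients = ops_per_day * days_per_week * weeks_per_year * years   # 48,000 operations
total_dalys_averted = patients * dalys_averted_per_patient        # 163,200 DALYs
print(patients, total_dalys_averted)
```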

The above calculations factor in a large number of assumptions and do not account for the myriad other factors that affect morbidity and mortality. That said, the order-of-magnitude difference in DALYs is stark nonetheless. The takeaway from this cursory analysis is that, aside from the fact that both examples have a net positive impact on the patients in question, simply applying an analytical model may produce results worth strongly considering for someone interested in having an impact.

Improving a doctor’s impact across their career 

Next comes the question of what one is to do with this information, given that they're looking to optimise for impact throughout their career. How one decides to alter their current career in order to have a greater impact will depend on a number of variables, including career stage and the relative weight placed on impact. Let us consider four different scenarios based on these variables, and how in each a doctor may choose to optimise their situation. We will divide career stage into trainee ('early') and attending ('late'), based on the inflexion point in the sigmoid model described above, and the weight placed on impact broadly into high and low (in reality each of these is a continuous variable, but they will be treated as binary here for the sake of simplicity).

Scenario 1 – Early career, low weight on impact

A doctor in their early career has a relatively large number of career options in front of them. They may have a choice of specialties and subspecialties, areas of practice, or potentially careers outside the traditional pathways altogether. An individual in this situation, who gives a mild weighting to improving their positive impact on the world, may choose to go about this in a number of ways. They may consider choosing a specialty or location to work in that satisfies their interest, maximises their personal fit, and improves their positive impact. Examples of this may be choosing to train and work in a location that is more constrained by the total number of doctors, so as to improve their marginal impact, or choosing a specialty that will have a relatively larger impact from direct work. One may alternatively choose to pursue a career that is slightly higher-earning than they might have otherwise chosen, and follow the pathway of earning to give.

Scenario 2 – Early career, high weight on impact

An early career doctor who places a high weight on impact likewise has a number of options in front of them; however, they may be more willing to make larger changes in direction. Doctors in this scenario may decide to direct their careers toward a well-defined EA cause area, such as working on global catastrophic biological risks. They may decide to train in a relatively neglected but high-impact area of medicine such as public health or health policy. One may decide to go into biomedical research, in pursuit of advancing scientific knowledge as an avenue to impact. An individual may also choose to earn to give, and may take the Giving What We Can pledge. Of course, they may combine earning to give with any of the other paths outlined above.

Scenario 3 – Late career, low weight on impact

A doctor who has entered their 'late' career, the portion of their career where they're likely to be making somewhat more of an impact, is, given their stage of career and time invested, more likely to be motivated to stay in clinical practice. In order to optimise their potential impact, however, they may choose to move their practice to a geographical location of greater need. Given their later career stage and potential to hold influential positions in policy or research, they may also choose to direct decisions on research topics toward areas with applicability to other pressing problems.

With a very low bar for entry, giving to effective charities appears to be an easy way to multiply one's impact many-fold when working as an attending physician. Salaries are often much higher than during one's training, and one may even choose to take on higher-paying roles in order to donate more to organisations working on pressing issues.

Scenario 4 – Late career, high weight on impact

As above, a doctor in their late career is somewhat more restricted in redirecting their clinical career. However, for someone who places a particularly high weight on impact there may be a number of viable options to improve their scope. As described above, utilising their relative influence to direct policy or research agendas seems an impactful way to spend time. Additionally, advocacy work has (at least anecdotally) been a route for physicians to have somewhat of an impact on various issues, although the advocacy pathway appears to be quite resource intensive.

Again, earning to give appears to be a high-yield approach for someone in their late career looking to maximise their impact. One who weighs impact more heavily may choose to scale up their giving, and if working within a high-paying specialty may be able to donate significant amounts.

People are motivated to take up a career in medicine for a number of reasons; often among them is the hope to do good. Whilst doctors may have a less-than-intuitive direct impact through their work, there are certainly ways that one’s motivation to do good can be optimised. How one goes about doing this depends on the weight they place on impact, as well as the career stage they’re at. Regardless of the way in which one begins to think about impact, and no matter their career stage, there is certainly value in giving more thought to this question than is typically given. 

If you’re interested in reading more about earning to give:

If you're interested in reading more about EA and cause areas such as global catastrophic biological risks (GCBRs):