Utilitarian

The ethics of metaculture is based on Utilitarianism, which in its simplest definition seeks the greatest good for the greatest number of people. Maximizing that which is good is an obvious goal for any system of ethics; less obvious is how we define good, how we quantify it, and which people are included in the equation.

Utilitarianism is used as a shorthand since it is the more commonly recognized term. Those who seek philosophical precision will recognize the ethical perspective described here as Consequentialism, or more specifically a version of State Consequentialism, in which ethical rules are judged by how they would affect the happiness of a whole society if they were enforced through the legal system or through social norms.

What is Good?

Good is the word we use to describe the things that benefit our survival. Even those who don't believe in evolution still universally share a concept of what constitutes good that is exclusively pro-survival.

Our emotions are the mechanism that evolution has instilled in our brains to tell us what is good and what is bad. We feel pleasure when we eat or have sex; we feel pain when we or those we love are injured. When sustained over time, a good ratio of pleasure to pain results in happiness and well-being.

The purpose of pleasure is to encourage your brain to do more of whatever it was doing that led to the pleasure, and the purpose of pain is to discourage repeating whatever caused it.

It can therefore be concluded that evolution has formed our brains to direct our bodies to seek greater pleasure while minimizing pain, which happens to be the goal of Utilitarianism. Utilitarianism thus empirically matches the ethical calculations our brains actually make when our neurons decide on a course of action.

Happiness is therefore the measure of good. In this wiki, happiness and well-being refer to long-term, sustained positive emotional states, not merely pleasure or joy, which can be fleeting and lead to greater harms when pursued shortsightedly. This sustained well-being has empirically been shown to be our goal in life.

You Can't Measure Happiness

A common objection is to dismiss the idea that happiness can be measured. But there are a number of ways to measure it, such as self-reported life-satisfaction surveys, and while none of them perfectly captures the state of mind of any individual, they give good aggregate results that show whether a large population is able to achieve satisfaction in life.

Therefore, an ethical utilitarian society would seek to constantly increase the measured aggregate happiness of its people, and it should pursue this goal directly rather than through proxy measurements such as economic activity and GDP.
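As a rough sketch of what measuring aggregate happiness directly could look like in practice (the 0 to 10 survey scale and every number below are hypothetical, chosen only for illustration):

```python
# Minimal sketch: aggregate self-reported well-being rather than a GDP proxy.
# The 0-10 life-satisfaction scale and all sample responses are hypothetical.

def aggregate_happiness(scores):
    """Return the mean life-satisfaction score for a population sample."""
    if not scores:
        raise ValueError("need at least one survey response")
    return sum(scores) / len(scores)

# Hypothetical survey responses on a 0-10 scale, from two consecutive years.
last_year = [6, 7, 5, 8, 6, 4, 7]
this_year = [7, 7, 6, 8, 6, 5, 7]

# The policy question is whether this number went up, not whether GDP did.
print(f"last year: {aggregate_happiness(last_year):.2f}")
print(f"this year: {aggregate_happiness(this_year):.2f}")
```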

What If Hurting This Guy Makes 2 People Happy?

This is the most common hypothetical counterexample that entirely too many people see as disproving utilitarianism. However, like many moral hypotheticals, it posits a situation that is fundamentally impossible given the way our brains have evolved and how societies work.

Humans are empathetic, social creatures. There is no way for us to harm another human without emotional repercussions on ourselves. Only a psychopath can blithely torture or kill another human and not feel their pain. And, being a psychopath, they're probably not feeling very happy either. So that's no way to go.

Then you must also consider the ramifications of implementing this hypothetical within the legal framework of a society. Let's say you think that killing one drifter and taking their organs in order to save five parents of young children, preventing those children from being orphaned, yields a net positive in happiness. The drifter has no family or friends and won't be missed by anyone, while the parents who were saved will go on to have fulfilling lives, and their kids won't end up in foster care. Within this superficial closed system, it appears that utilitarianism would support this choice.

However, if it is right for one person to kill a drifter for their organs when it saves a few lives, then it has to be right for everyone to do it. And what does a society look like where it is legal to harvest the organs of the unhoused? Not like one that anyone would actually want to live in, because it wouldn't actually make us happy: it would be morbid, fearful, and lawless.
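The arithmetic only looks favorable when the calculation stops at the closed system. A toy calculation (every utility number below is invented purely for illustration, not a measurement) shows how including the rule-level consequences flips the sign:

```python
# Toy utility numbers, invented purely to illustrate the argument above.

# The "closed system" the hypothetical asks you to consider:
act_level = {
    "drifter killed": -10,
    "five patients saved": +5 * 5,
    "children kept out of foster care": +10,
}

# What follows once the act is a permissible rule for the whole society:
rule_level = {
    "everyone fears being harvested": -1 * 1_000_000,  # a small loss spread over a huge population
    "trust in hospitals and strangers collapses": -50_000,
}

closed_system = sum(act_level.values())
whole_society = closed_system + sum(rule_level.values())

print(f"closed-system total: {closed_system:+}")   # looks positive
print(f"whole-society total: {whole_society:+}")   # overwhelmingly negative
```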

Any society that benefits from harming others (through forced labor, unfair taxation, rent-seeking, theft, and the like) will never be as happy as one that treats all people fairly and minimizes the burden of obligations that contradict a person's desires and self-determination.

While the hypotheticals posit more people winning than losing, historically these unfair arrangements have always been pyramid-shaped, with the beneficiaries at the top and the people being harmed at the bottom. And a billionaire on ecstasy at a yacht party is not sufficiently happy to mathematically make up for the desperation and toil of the thousands of workers who make that lifestyle possible.
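One way to see the arithmetic, assuming happiness is reported on a bounded 0 to 10 scale (that bound is an assumption made here only for illustration): a single beneficiary's surplus is capped, while the losses grow with the number of people harmed.

\[
h_{\text{beneficiary}} \;\le\; 10 \;<\; N \;\le\; \sum_{i=1}^{N} \ell_i
\qquad \text{whenever each of } N > 10 \text{ workers loses at least one point } (\ell_i \ge 1).
\]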

Whose Happiness Counts?

People primarily want to benefit their in-group. Not necessarily at the expense of others, but usually in favor of them. The closer the in-group, the more we favor it, with family taking priority over community, nation, and larger cultures. This prioritization makes sense both for our personal life choices and from an evolutionary perspective. But when considering universal ethics, you need to use a universal in-group. In other words, if you don't consider the happiness of everyone, your system is inherently unjust.

"Forward thinking" crypto-bros think that they can cleverly circumvent the need to care about the living by contributing to the theoretical happiness of future people. This is an ethical cop-out used to justify selfish behavior, such as using Effective Altruism as an excuse to steal billions of dollars in a cryto pyramid scheme. Someday you plan to use it for charity, so the more money you get now the more good you can do later!

When calculating the utilitarian benefit of any action, the tangible effects on the living should be prioritized significantly above the needs of any theoretical humans, or ones you may theoretically help in the future once you get rich. While we can't ignore our impact on future generations, we cannot prioritize their needs over our own either. Part of our happiness depends on knowing we are leaving a better world for our children, but a better world is always one that helps the living first. Just like in an airplane, put on your own oxygen mask first before helping your children with theirs. Especially if they haven't been born yet.

There is also no justification for harming the living in order to benefit the theoretical. To do so only creates a world in which such harm is acceptable, and that will always be a drag on happiness. The living may choose to make sacrifices for their children or future generations they may never know, but they must do so freely and without coercion.

Prioritizing the needs of the living over the theoretical is also what makes abortion an ethical choice whenever it is desired. Quality of Life is the most important pursuit.

Extra People is Not Extra Happy

Some math wizards point out that if you are adding up all the happiness in society, then adding to the population would make number go higher. This is not how Quality of Life works and you know it: what matters is the well-being of the people who actually exist, not a raw total that can be inflated just by adding more people.
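A small worked example of the distinction, with hypothetical populations and scores: adding people can make the raw total go up while the quality of life of everyone in the society goes down.

```python
# Hypothetical populations and 0-10 well-being scores, purely for illustration.
original = [8] * 10              # 10 people, each doing quite well
expanded = [8] * 10 + [2] * 20   # the same 10 people plus 20 barely-content additions

for name, population in (("original", original), ("expanded", expanded)):
    total = sum(population)
    average = total / len(population)
    print(f"{name}: total = {total}, average quality of life = {average:.1f}")

# The total rises from 80 to 120, but average quality of life falls from 8.0 to 4.0.
```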

There Are No Infinities

The notion of a moral trump card is equivalent to assigning an infinite moral value to some particular action, which leads to undesirable outcomes. Read More.
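As a loose analogy for why a trump card breaks the calculation, consider what happens when a value is treated as infinite: no finite cost can ever register against it, and two competing trump cards cannot be traded off against each other at all.

```python
import math

# Loose analogy: treat a moral trump card as an infinite utility value.
trump_card = math.inf

# No finite harm ever counts against it...
print(trump_card - 1_000_000)    # inf
print(trump_card - 1e308)        # inf

# ...and two competing trump cards cannot be compared or traded off.
print(trump_card - trump_card)   # nan: the difference is undefined
print(math.inf > math.inf)       # False: neither outranks the other
```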

Temptation and Delayed Gratification

Much of morality is centered on the concept of avoiding temptation and its corollary, delayed gratification. These topics are discussed in greater detail on those pages.

Can a TV Show Explain It?

[Image: Doug Forcett, captioned "If Doug Forcett Wrote a Wiki Instead"]

For those who prefer to get their ethics from sitcoms, it's basically the point system from The Good Place. The show is actually a robust introductory course in ethics and is truly a good place to start learning about this subject if the idea of utilitarianism is new to you.

The Good Place: How Afterlife Points are Assigned
Philosophy Crash Course - Utilitarianism

Can You Sing a Song About It?

Apparently you can. If this wiki has proven one thing, it's that you can sing a song about anything.

Spoon - Utilitarian