In this episode, I examine the complexities of moral philosophy, focusing on the concept of evil and its relationship to intentional harm. I challenge simplified definitions and highlight the balance between short-term pain and long-term benefit through various examples, including medical procedures and parental discipline.
The discussion critiques the "do no harm" principle, revealing its potential to justify harmful actions for perceived greater goods. By addressing moral disputes and emphasizing the importance of objective guidelines, I advocate for clarity in moral systems. I conclude by sharing personal insights on navigating moral dilemmas, aiming to equip listeners with a framework for ethical decision-making.
FOLLOW ME ON X! https://x.com/StefanMolyneux
GET MY NEW BOOK 'PEACEFUL PARENTING', THE INTERACTIVE PEACEFUL PARENTING AI, AND THE FULL AUDIOBOOK!
https://peacefulparenting.com/
Join the PREMIUM philosophy community on the web for free!
Subscribers get 12 HOURS on the "Truth About the French Revolution," multiple interactive multi-lingual philosophy AIs trained on thousands of hours of my material - as well as AIs for Real-Time Relationships, Bitcoin, Peaceful Parenting, and Call-In Shows!
You also receive private livestreams, HUNDREDS of exclusive premium shows, early release podcasts, the 22 Part History of Philosophers series and much more!
See you soon!
https://freedomain.locals.com/support/promo/UPB2025
Transcript
00:00Well, good morning, everybody. Stefan Molyneux from Freedomain, freedomain.com slash donate.
00:04Pinch-punch first day of the month. We are talking about August the 1st, 2025.
00:12Hope you're having a great day. I wanted to tie up some loose ends in my brain,
00:17and you might as well bear witness. I think it'll do some good for the world.
00:20But yesterday, I did a show where we talked about the origins of evil,
00:27and people were talking about evil being the infliction of intentional harm,
00:35the infliction of intentional harm to others. Now, it's funny, you know, because I like to think
00:43of myself as a fairly rational guy, but I'm telling you, a lot of it is just gut. Now,
00:50the gut doesn't prove anything, but my gut is like, that is not, that can't be right. That's
00:58not satisfying. And not to compare myself, of course, to illustrious folks, but I always remember
01:05the story of Frank Sinatra. And Frank Sinatra would be like singing with a full orchestra.
01:11I don't know, like 80 instruments or whatever, right? And Frank Sinatra would be able to
01:18pick out one, say, bassoon that was a little off. Oh, oh, oh, I think we've got a little stranger in
01:25there. And I've always loved that story. I mean, any sufficiently advanced technology is
01:31indistinguishable from magic, and any sufficiently advanced skill seems indistinguishable from psychic
01:38abilities. So, of course, I have been studying and debating and reasoning for over 40, over 40 years
01:49now. And it gives me good instincts about this stuff. And yesterday, my instincts were going off
01:59full tilt boogie. And this argument about the intentional infliction of harm, I pushed back
02:09against it by providing some counter-examples, where you have to say harm becomes complicated to
02:15define, right? And the reason that it becomes complicated to define is because, obviously,
02:22if you need an emergency tracheotomy, you're pretty happy if there's a doctor around to do it.
02:27And the Heimlich maneuver, I actually interviewed the woman, the daughter of Dr. Heimlich, Janet
02:34Heimlich, many years ago. And if you, somebody needs to give you the Heimlich maneuver because
02:41you're choking and they break your rib, well, that's the intentional infliction of harm, but it's
02:45not sadistic, and it's with the larger goal and good. So then you have to balance present harms
02:50and future harms. It all becomes very complicated. And things that are very complicated
02:56become impossible to manage from a moral standpoint, right? And we, you know, I always go back to this,
03:05we expect maybe two, three, four-year-old little kids to be moral, right? We expect them to be moral.
03:12And if we expect kids to be moral, it can't be that it's so complicated. And a coach who's pushing
03:22you to run faster, harder, personal trainer who's telling you to lift more weights or whatever is
03:28definitely causing you harm. Ah, yes, but in the long run, all that kind of stuff, right? Now, what
03:35does it mean by the long run? When is the balance? All of these things are very complicated. And I think
03:41they would have to do with aesthetically preferable behavior, but it's really tough to say, well,
03:47you got to find just the right calibration and balance between short-term pain and long-term gain
03:55and so on, right? And of course, these claims don't exist outside of assertion. So, for instance,
04:04a bully could say, well, the reason I'm bullying this kid is to toughen him up, right? That's why
04:12this kid, I'm bullying him so I can toughen him up, right? Well, does it happen sometimes that kids who
04:21are bullied toughen up? Yes, it does. So, can a bully then say, I'm trying to toughen him up? Sure.
04:28Is it true sometimes? Yeah. Does that make bullying okay or moral or good? No. No, that doesn't. That
04:37doesn't. We understand that doesn't work, right? Sometimes if kids are really irresponsible with
04:42their property, the parents will take it away. And this is to teach the kid to be more careful or to
04:50treat his possessions more carefully or some things like that, right? Can a thief then say, well, I'm just
04:56teaching him to respect his property. I mean, he left something. He left his bike out on the front
05:00lawn. He doesn't ever put it away. I'm taking it away from him to teach him a lesson. See, these claims
05:05become virtually impossible to adjudicate. I mean, obviously, there are clear ones where it's bad and
05:13there are clear ones where it's good, but there's a lot of gray areas. And the reason we need principles
05:18is there are a lot of gray areas in life. Most of the decisions that we make are not
05:25world-spanning Genghis Khan, good or evil decisions. They're little decisions to tell
05:30the truth or to hold our tongue, to confront someone or to back away, to do something shady
05:38at work or not to do something shady at work. All of these, those are the decisions that we
05:43normally have to make. We don't usually have the decision to go to war or not, but just to
05:48tell the truth in the public square is the big decision that we have to make. So, anyone
05:55can claim that what they're doing is for the greater good, right? I mean, the communists
06:00do this all the time. And the communist philosophy is justifying the use of violence in order to
06:06secure a happy, productive, peaceful, wealthy world for the proletariat. So, the communists
06:13submit a famous statement, you cannot make an omelette without breaking a few eggs. You cannot
06:18achieve the good in the world without the use of violence. And of course, the eggs are broken,
06:28the omelette never shows up. But the sadists really enjoy breaking the eggs, for sure.
06:32So, I'm always suspicious, and this is true of myself, not just of others. I'm always
06:38suspicious when the examples are obvious, because morality is about the non-obvious examples.
06:46There's no nutritional book that says, don't eat arsenic and gravel, because obviously you
06:52shouldn't eat arsenic or gravel, so you don't need a book for that. We need a book for the
06:56challenging cases, the non-obvious cases. So, when people say, well, evil is when you intentionally
07:03inflict harm on someone. I mean, that sounds good. But then, of course, you bring in the,
07:08well, you can harm people in the short run for the greater good in the long run.
07:12I mean, totalitarian regimes that euthanize people are saying, well, we have to do this for the sake of
07:20preserving our scarce medical resources for others, and so on, right? COVID was a lot about
07:28short-term sacrifices for the greater good in the long term, which, you know, in general,
07:32very often did not turn out to be the case. So, anyone can make that claim. And since the proof
07:38of the claim resides in the future, how do you deny it now? Right? So, a bully who says,
07:44I'm bullying this kid to toughen him up, and coaches do it all the time. Coaches push kids,
07:51sometimes make them cry, and they say, but it'll toughen them up in the long run.
07:54It worked for me. It'll work for them. So, the problem is that all moral claims
08:01that require the future to be validated can never be proved in the present. That was an
08:07awkward way to put it. Let me take another run at that sentence. It is impossible to prove
08:12the current validity of moral claims when the proof exists only in the future. The future isn't here
08:19yet. So, is this right, or is this wrong? Is this right, or is this wrong? Well, if the proof of the
08:29rightness or wrongness of the action lies in the future, then you can't ever have moral certainty
08:35in the present. Now, UPB, abstract principles, give you moral certainty. The initiation of force,
08:43respect for property, no rape, theft, assault, and murder, these are all validated universally,
08:47and therefore, they're not dependent upon time, whereas the intentional infliction of harm
08:55requires a couple of things. It requires that you read someone's mind, because everybody who will
09:02claim, oh, I didn't mean to, that's what you always hear, right? Oh, I didn't mean to.
09:07It was an accident, right? I mean, even somebody who, like, it's pretty wild, like, even people on X,
09:13they will, you know, say that I'm wrong and dumb and things like that, right? And I sort of call
09:19them out for their rudeness, and they say, I mean, I didn't mean to offend you, I was just being blunt,
09:23right? So, intentions are very difficult to read. So, all moral systems that rely upon mind reading
09:33cannot be validated, objectively. So, a kid who takes another kid's toy, and the other kid complains,
09:43runs to the teacher, teacher goes to the kid who took the toy and says, why did you take the toy?
09:47He says, I thought, I thought he wanted me to have it. I thought he was done with it. I thought he
09:50wanted me to have it. Okay, how do you adjudicate that? I mean, it's true, of course, that you could
09:56make that claim, even under UPB. But that is a specific instance that needs to be adjudicated,
10:06but UPB does not require mind reading. It does not require an analysis of intentions. I mean,
10:14as a theory, right? Sometimes you have to, I mean, the difference between first-degree murder and,
10:20I don't know, negligent manslaughter or something like that, negligent homicide. No,
10:24homicide is just any death, right? Anyway, some sort of manslaughter or negligent, whatever,
10:30right? The difference is that one is willed, the other one is not willed, but results from
10:35negligence. So, there's a certain amount of intention there. So, you do have to, like,
10:38murder is wrong, but first-degree versus third-degree crime of passion versus, you know,
10:43you hired a hitman or whatever, you planned that a week in advance. Those are different things.
10:48So, UPB, in the theory, does not require calculating effects in the future as a theory. It does not
10:58require calculating effects in the future, and UPB does not require an analysis of intention.
11:06Now, specific adjudications of UPB, well, I'm trying to think. So, specific, so, if you steal something
11:16from someone's yard and you thought it was, that they didn't want you to have it, like, maybe there
11:21was a sign next to a couch that says, take me, and then, like, 15 feet away, there was a bike,
11:29and you took the bike thinking that both things were being offered, right? So, there could be
11:33something like that. Again, pretty rare stuff, but it could, you know, you could, but the theory
11:37doesn't require that. The adjudication of a particular instance may require that. The theory
11:43doesn't require an analysis of intentionality, and it doesn't require the guessing of future
11:52effects. Because if your moral theory requires that which can be lied about, you don't have an
12:01objective moral theory, and I think that's what I'm really talking about here. I mean,
12:05this is the classic. He said, she said stuff regarding non-injurious rape. So, I mean, sorry
12:13to discuss such an ugly subject, but rape is the unambiguous moral wrong. Stealing, you could
12:20be stealing something back. Assault could be self-defense. Murder could also be self-defense,
12:26but, or killing could be self-defense. But a rapist, there's no ambiguity. It's like,
12:34it's just, it's evil in and of itself. There's no self-defense rape, right? So, with regards
12:42to rape, the big challenge societies have always faced is the he-said-she-said dilemma, where
12:48she voluntarily went to the man's house. She had some, a couple of drinks, but not enough to
12:54be incapacitated. They had sexual activity. And then the next, and then there's no injuries of
13:01any kind. She stays over. She leaves the next morning. And then later, she says that the sex
13:09was non-consensual. But there are no witnesses, there are no injuries, and there's no evidence.
13:14And this is a horrible situation. I mean, honestly, horrible, because there are certainly times where
13:20the woman really felt bullied or pressured, or the man made some kind of threat that was not
13:23recorded, or something like that, right? So, there's absolutely times where it could be
13:29non-consensual, and there are other times where it, the man had every reason to believe, or reasonable
13:36reasons to believe that it was consensual, but later, there's a withdrawal of the consent, and it's just
13:41a, it's a big, ugly, difficult, impossible, really, to adjudicate kind of mess. So, what do you do?
13:50Well, of course, society, by not allowing men and women to be alone in those kinds of situations,
13:56by requiring them to be married, and so on, that is really all that society can do, because
14:04you can't adjudicate these kinds of things. That's the he-said-she-said dilemma. And,
14:11so, if you have a moral system that requires mind-reading, or the guessing of future effects,
14:19which is the intentional infliction of harm, then you have a problem, because you have a non-objective,
14:28non-universal, non-rational system, because it's relying on intentions and future guesswork.
14:35The theory, right? The theory. Now, again, I'm not, UPB does not adjudicate individual instances,
14:42right? UPB says that stealing can never be universally preferable behavior. UPB is the
14:47respect for property rights. It doesn't adjudicate every complex land dispute, or
14:51neighbor dispute over a tree, that those things would have to be adjudicated. So, but the adjudication,
14:58of course, adjudication exists because of ambiguity as a whole, but the moral theory cannot itself
15:04contain ambiguity, which is why UPB doesn't contain
15:12ambiguity. It doesn't require mind-reading. It doesn't require, the theory does not require
15:17the guessing of future consequences, right? I mean, to take an example that, you know,
15:24sounds extreme, but that's all right. Somebody cuts somebody else's throat in a restaurant and then
15:30says, oh, I thought he was choking. I heard him cough. I thought he was choking. I mean, that's
15:37a challenge, right? Let's say that the guy is a doctor or whatever, right? It's tough. You know,
15:43but then as I was trying to give him the tracheotomy, he writhed and, you know, the knife went in. Like,
15:49that's complicated stuff. So that's somebody who's intentionally inflicting harm with the claim
15:55that future harm will be alleviated. Like, you won't choke to death because I'm giving him a
16:02tracheotomy or something like that, right? So, or, you know, some guy gives a woman,
16:09breaks a woman's ribs, giving her the Heimlich maneuver and then says, well, I thought she was
16:12choking. And she said, like, I had a mild cough. What are you doing, right? So, UPB says, of course,
16:19that assault, the initiation of force, is absolutely wrong without regard to future results, without
16:26regard to mind-reading intentionality or anything like that. These things are wrong. The do-no-harm
16:33theory runs into complications even in the theory, not just the adjudication of individual disputes or
16:41questions. If the theory is ambiguous and requires facts not in evidence, right, then the theory
16:49can't work. The theory needs to be absolute, and then the adjudication deals with
16:55complex cases. So, the law needs to say that murder is wrong. Every court trial is there to
17:01adjudicate individual charges of murder. The law and the court. And the court deals with all the complex
17:08requirements for proof beyond a reasonable doubt. So, if the theory requires mind-reading
17:18and balancing between present and future harms, then the theory itself is ambiguous and requires facts
17:27not in evidence. So, that's a problem. So, there's a difference between the theory and the
17:35adjudication, between the law and the trial. And if the trial deals with ambiguity, the trial deals with,
17:44you know, facts not in evidence, the trial, you know, rejects hearsay, requires cross-examination,
17:49and is not absolute. I mean, if it was absolute, then, like, if there was video of the person
17:55actually killing, murdering the guy or whatever, and it was unambiguous, there wouldn't be a trial,
18:00right? I mean, almost certainly. Or if there was a trial, it'd be very short. So, ambiguity is for
18:08the adjudication to the trial, to unravel all of that. If the theory contains ambiguity, then you
18:14can't know what is right or wrong in principle. In other words, every examination of moral activity
18:23is a trial, which is like trying to build a bridge through trial and error, rather than having principles
18:31of engineering that are absolute. So, you have principles of engineering and physics that are absolute,
18:36and then you have building a bridge, which, you know, you don't want to over-engineer it, make it too strong
18:41as a waste of resources, you don't want to make it too weak because then it collapses. So, that's the
18:46tension and the ambiguity. What's the right amount of time and money and energy to spend
18:53on building a bridge? I don't know. It depends. Is it a bicycle bridge, or is it a bridge that has
19:00to take a hundred trucks? All of these things are different, right? It's easy to over-engineer
19:05and to build a bridge that is too strong and wastes resources, and it is easy to under-engineer.
19:12So, the physical building of a bridge is, you know, complex and ambiguous, and there isn't a final
19:19right answer. There's just a right-ish answer, a good enough-ish answer. Like, you can't build a bridge
19:24and say, well, this bridge is objectively 0.5% over-engineered or under-engineered. You can't say that.
19:31You just come up with a thing, right? You come up with a bridge. Now, the laws of physics are universal
19:36and absolute. The building of the bridge is, to some degree, subjective and ambiguous. What is
19:45the right strength of the bridge, right? There's a... In Toronto, there's a CN Tower. CN Tower has a
19:52glass floor, which can take the weight of four hippopotami. Is that over-engineered or under-engineered?
19:59I imagine it's quite over-engineered to be on the safe side. Could you get away with something that
20:06didn't take four hippopotami, but only 3.999 hippopotami? Yeah, probably. But it's a nice
20:12round number. So, that's what they do. So, you need objective universal rules, and then
20:19sometimes the applications of those rules is going to have some subjectivity and some complexity and
20:26some ambiguity and you just kind of make your best choice. But if your theory is kind of an
20:35ambiguous, complex, multifaceted, rule-of-thumb squint down the line, bunch of guesswork, then
20:43you don't have a moral theory. And that's why feelings and consequentialist-based and mind-reading
20:50intentionality-based, quote, ethics aren't ethics. It's confusing engineering with physics.
20:58So, if there's a medicine for a particular illness, then sometimes people might need more
21:04or less of that medicine based upon their height, their weight, their size, or whatever, right?
21:08So, this medicine is good for this illness. It's the absolute. I mean, without allergies and
21:15stuff like that, just in general. But how much of a dose you give, that is a different matter.
21:22So, moral theories need to be absolute, which means self-contained, which means not ambiguous,
21:28not consequentialist. Because if you say something is good or bad based upon its results in 6 months,
21:363 months, 5 years, 10 years, or whatever, if you say that something is good based upon
21:42consequences in the future, then you cannot say, at the moment, whether it is good or not.
21:49And, of course, morality is about the future. Morality is about making decisions in the present
21:54to have integrity and virtue in the future. But if you cannot make decisions about morality
22:00in the present because you have to wait for the consequences in the future,
22:04then you have no standard by which you can morally judge your actions.
22:08You know, as we were talking about in the show last night, if I break up with some woman,
22:15then she's sad, she's going to cry, and she could be sad for months. So, am I intentionally
22:22inflicting harm? Well, I mean, my purpose is not to inflict harm, but harm is going to be the
22:28inevitable consequences of me withdrawing my affections. Of course, the same would happen
22:32in reverse if I wanted the relationship to continue, and she didn't, and so on, right?
22:37So, your moral theory needs to be absolute. It cannot rely on things that people can lie about.
22:44I didn't mean to, it wasn't my intention, and it can't rely on facts not in evidence,
22:48such as the effects weeks or months or years or decades or centuries down the road.
22:53Also, the issue is the seen versus the unseen, which is, people see, like, if you look at the
22:59deportations in the United States or other places, then people see, you know, crying families being
23:05hustled to the border or removed from the country, and that's very vivid, that harm. However, when you
23:11have a bunch of extra people in the country, the price of housing is higher, access to health care
23:17is diminished, and so on, right? And traffic is worse, and people speed because they're frustrated,
23:23and then there's car crashes. Like, there's a whole bunch of harm that happens, and the problem is
23:28the seen versus the unseen, is that if you're going to make decisions on what appears to be
23:32inflicting harm, then you are going to be seeing the most obvious harms, but not the subtle,
23:39unrecorded, abstract harms, which are very real, but can't really be traced.
23:43That's the old argument that if the government spends $5 million to create 50 jobs, then the
23:49people who get those jobs are very happy, but the 100 people who didn't get jobs because
23:53the $5 million was taken out of the economy, they don't even know that they lost their jobs,
23:57right? So, I mean, this is the seen versus the unseen was big under COVID, right? In that
24:02there were people who lost their jobs, wasn't really recorded, there were people who lost
24:11their businesses, weren't really recorded, there were people who didn't go to the doctor
24:15or the hospital, that wasn't really recorded, and so that's all sort of scattered and diminished,
24:20and then there were people who died of COVID, and those were recorded and vivid. And so,
24:25if you have a moral system, based upon not doing harm, then obvious harms will be opposed by that
24:33moral system, which will often create ripple harms that are not detected by the system,
24:37which are actually worse than the initial harms, right? So, yeah, it doesn't, it doesn't work.
24:43It doesn't work as a moral system, and a moral system cannot have within it things which cannot
24:50be known at the time. A moral system cannot have elements of decision-making that cannot be known
24:59at the time, and intentionality cannot be known, because people can lie about it and fake it. And
25:06the moment that intentionality becomes a big deal in a moral system, then people would just fake
25:10intentionality to make it impossible to prove that they had malign intent, right? I mean, if you look at
25:16the libel laws in the United States, it says, well, you know, if you're a public figure, people can say
25:20whatever they want about you, as long as it's not done with actual malice or a reckless disregard for
25:24the truth, and everybody knows that, who's in the media, so they just make sure that they never
25:29write anything down saying, well, I have a reckless disregard for the truth of
25:32this, or I have actual malice towards this person, right? They don't do anything like that at all,
25:36and so it becomes impossible, really, to protect yourself, in many instances, as
25:40a public figure, according to American defamation laws. So, if intentionality becomes important when
25:46judging a harm, then people will simply say to all of their friends, oh, I hope to really do good
25:50with this, and they write things in their diary saying, oh, I really want to do good with this,
25:54and then they send emails and say, oh, I really want to do good with this, and they create a whole
25:59trail saying that they really want to do good with this, and then they have protection against the
26:04charge that they were intentionally inflicting harm. So, I don't know, I've got a paper trail a mile wide, and
26:10a light year long about how I wanted to do good, right? So, you can just get around it that way,
26:14and you can't oppose bullying, because bullying toughens up some kids, some people, right?
26:21Somebody could say, oh, I didn't pay my employees, because I really want them to become entrepreneurs,
26:27and I remember when I wasn't paid for a job when I was younger, I became an entrepreneur,
26:33which has been great for me, so I just decided not to pay my employees, because I want them to become
26:37entrepreneurs, because it was the best thing that ever happened to me. Yeah, good luck with that.
26:41So, a moral system cannot contain a requirement for facts impossible to know ahead of time,
26:48and of course, the future result or future effect of a moral choice by definition cannot be known in
26:54the present. So, mind reading, you can't know, you can't do it, and so can't be part of your moral
27:00system, and future effects, you can't know by definition in the present, therefore can't be part
27:05of your moral system, which is why the do-no-harm stuff does not satisfy any of the requirements
27:09of a moral system, and will be subject to enormous amounts of manipulation and corruption.
27:15And also, of course, telling the truth causes people harm. An obvious example is a doctor who
27:23tells you that you have a disease, causes you emotional harm, okay, now you can get it treated,
27:29maybe you can get better, but it does cause harm. And again, you're looking at the future
27:32consequences that are positive down the road. And of course, sometimes the doctor tells you
27:37that you have a disease, that you can't get cured, you can't get better.
27:39And if you tell someone who told a lie that they told a lie, that causes them harm.
27:48And if a cop catches a criminal, that causes the criminal harm. And if someone believes that
27:56they're a great singer, and you tell them that they're not a great singer, you know, this sort
28:00of Simon Cowell stuff, if someone thinks they're a great singer, and you tell them they're not a great
28:04singer, they get very upset and unhappy, and that causes them a harm, and it might break
28:09their heart for months or maybe even years. If there are 100 actors up for a role, only one actor
28:16gets chosen, the other 99, who aren't as good or appropriate to the role, experience a harm.
28:23And I mean, honestly, this just can go on and on. If someone believes that all humanity is a blank
28:32slate made out of silly putty that society can mold into whatever it wants, if people
28:39believe that, and then you prove to them that human beings are not a blank slate, that there are
28:45built-in capacities that vary between people that they cannot surmount, then those people get upset.
28:54So, sometimes you are upset about something, and it turns out that it was not a bad thing,
29:02but in fact a good thing, right? I mean, you're mad because you miss a plane leaving, you're late,
29:09you miss the plane leaving, and the plane crashes, right? So, you think, oh my gosh, I'm harmed,
29:14my interests are harmed, things are bad, things are negative, and then you find out that things are
29:19positive. You get fired from a job, you're very unhappy, but it turns out that because you got fired
29:26from the job, you end up starting your own business, and that ends up being more satisfying
29:31and successful, and that is how you feel better, right? I certainly had a lot of things in my life,
29:40I really work hard, and pretty successfully at this point, to not judge whether things are good or bad.
29:47It was negative for me to be deplatformed, but it opened up a whole bunch of other things that were
29:52very positive for me, so am I going to say that's bad? Well, it's kind of hard to say that for me.
30:02Every relationship that I had that didn't work out was negative as a whole, because you want your
30:09relationships to work out, so it was negative, but I would trade all of those relationships as I guess
30:16I did for the wonderful marriage I have with my wife, so although it was negative at the time,
30:22it turned out to be positive. You know, when I left theater school, I was unhappy because I loved
30:30the acting world and so on, but then it turned out to be, it put me into a much better and happier
30:37and more productive direction. So, I mean, this idea that in the infliction of harm and so on, it requires
30:44 all of this mind reading, it requires this guess about what's going to happen in the future,
30:50 and it is just a form of hedonism, right? It is a form of hedonism, because if you say, well, causing
30:57 people harm, the intentional infliction of harm, is wrong, well, again, intention is mind reading, and people can
31:04 just lie about it: I didn't mean to. So you don't have any objective moral standard. People can just
31:08 wriggle out of it. But you can't wriggle out of the fact that stealing can never be universally
31:12 preferable behavior. You can never wriggle out of that. That is an absolute. So, if you're giving
31:19 people all of these get-out-of-jail-free cards and all of this subjectivism, you can't say that
31:25 you have a moral standard. It's hedonism. And how does it rope in people who don't agree
31:33 with you? This is the most fundamental thing, right? How do you deal with people who don't agree?
31:38 Well, a scientific convention does not invite people who reject science,
31:46 who are opposed to science, right? That is not what they do. They won't invite you.
31:54 So, UPB is airtight logic that even a child can understand.
32:03 And therefore, the only way you can reject UPB
32:10 is to reject logic, reality, and language, and embrace rank hypocrisy, because you can only reject UPB
32:20 by accepting UPB. It is universally preferable behavior. To reject universally preferable behavior
32:25 is a ridiculous self-contradiction. And so, you would be revealed as incredibly emotionally immature,
32:33 you would be revealed as manipulative, as maybe insane, either epistemologically
32:43 or morally. And you would just be rejected, right? You would be
32:52 rejected from any rational debate. And people would have no problem condemning you for your, well,
32:58 mental issues, immaturity, hypocrisy, and manipulation. You would just be
33:03 kicked out and dumped from all of that, right? And because society,
33:11 including children upwards, would accept UPB, people would have no problem, if you acted to violate UPB,
33:17 with using ostracism or coercion against you, right? This would be fully accepted and fully understood.
33:24 People who advocated for something crazy like,
33:32 oh, let's bring back slavery, would be ostracized from decent, civil society.
33:40 They wouldn't be invited to conferences. They would never achieve any particular artistic, social,
33:44 business, or political success. And so, that's how it would be
33:51 dealt with. That's how it would be run. So, it's just a form of hedonism to say, well, I don't like to
33:57 intentionally inflict emotional harm, and I don't either. It's not like I wake up in the morning
34:02 and say, ooh, who can I harm today? But the way that I view it is that if people have irrational,
34:12 anti-rational thoughts in their minds, that means they can't be happy, and they need to be confronted
34:17 on those anti-rational thoughts so that they can be happy. It's the same as if I'm at the gym
34:26 and somebody is exercising in a way that is going to injure them for sure, right? They're just going
34:33 to hurt themselves. I would feel pretty honor-bound to say, hey, you shouldn't do
34:40 it that way, because here's what's going to happen, right? When I was
34:45 doing physical labor, if somebody was lifting, not with their knees, but with
34:50 their back, right? You know, you just come up like one of those dipping birds.
34:56 I would say, you should lift with your knees, not with your back. That's going to hurt your back.
35:00 You know, that kind of stuff, right? If somebody were to say to me, I've never jogged
35:06 before, but I'm going to run a marathon this weekend, I would say, oh, that's a bad
35:11 idea. You're going to hurt yourself. You're going to really have a bad time, because
35:17 you need to work your way up to that. You can't just go and run 26 miles and change without any
35:22 preparation. And I want to help people. Particularly, of course,
35:29 having seen in my family how mysticism wrecks people's lives, and having seen in my own life
35:39 and the lives of other people I know how rationality has helped save and create great,
35:49 wonderful, happy lives. I mean, if you have the cure to an illness that afflicts
35:56 most people, then why wouldn't you want to spread that cure? If people are in
36:05 chronic pain and unhappiness and anger, discontented and frustrated and
36:11 tense and unable to fall in love, and you have a cure for that, which is free, all it costs is
36:17 your pride, right? The only thing that the truth costs you is your pride. Well, then you should
36:22 spread that. Why would I want to hoard it for myself? For both
36:25 selfless and selfish reasons, I want to share it. So yeah, it's just a form of hedonism. Let's say somebody does
36:32 like or prefer inflicting harm on people. Well, how do you prove to them that they're wrong?
36:41 That's the big question. I think that's pretty good. I think we're 80% of the way there. I'm sure
36:45 there's a little bit more, but I have a meeting now, so I'm going to stop. I look forward to
36:51 your feedback. Thank you for this great conversation. Hugely appreciate it.
36:55 Freedomain.com slash tonight. Love you guys. Bye.