Philosophy is often defined as "thinking about thinking" or the "pursuit of knowledge for its own sake". However, these definitions do not capture the full breadth of the subject; it is very difficult to pin down the exact meaning of philosophy in one sentence. Given the incredible depth of the field, I will focus on a few key areas. I aim to cover the thoughts of the classical philosophers I find most interesting, ranging from Socrates all the way to the Stoics. I will also write about a fairly modern direction known as Analytical Philosophy, which aims to break down supposedly complex concepts into their simpler constituents. A common sentiment is that philosophy is an accumulation of unproductive and abstract contemplation. However, apart from placing interesting ideas into your brain, it teaches you to think more analytically and reason more effectively, which is useful in many areas of life.
Recent developments in Virtual Reality (VR) have enabled it to be applied in many different fields, such as video games, robotics and cinema. The environments created by VR systems already provide an almost lifelike experience. If one extrapolates these developments into the future at any rate of growth, whether over a hundred years or tens of thousands of years, such games and simulations will eventually become indistinguishable from reality. So how do we know that this hasn't already happened, and that we aren't living in a simulation run by some posthuman civilization?
Over the past few decades, the improvement of Artificial Intelligence and computational power has left many philosophers ruminating about the nature of our reality. Nick Bostrom, a Swedish philosopher at the University of Oxford, has come up with a compelling line of reasoning suggesting that we may, in fact, be living in a computer simulation. This would imply that your brain is also just part of that simulation and not actually "yours".
To establish this hypothesis, Bostrom first clarifies a few assumptions. One is the "substrate independence of the mind", which essentially states that the mind could be implemented not only on biological neurons (as it is right now) but also on other computational substrates, such as silicon-based processors, and still be functional.
Inside our brains, functions and dynamic processes are running on a certain type of hardware. In principle, if you could take those same functions and processes and run them on some other hardware, such as a computer that could copy them exactly, you would get the same result and the same outcome: the mind. The mind is therefore not tied to our brain or to particular atoms; rather, it is a bundle of functions and processes.
An analogy for the substrate independence of the mind is platform-independent code. Such code can run on any computer and will still function as it was programmed to. Our computers are not yet powerful enough to run the computational processes that take place in our brains, but ultimately this computational power will be achieved.
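The analogy can be made concrete with a minimal sketch (the function and its name are invented for illustration): a pure function's output depends only on its inputs, not on the hardware that evaluates it, just as the substrate-independence thesis says a mind depends only on its functional organization.

```python
# A toy "mental process": a pure mapping from input to output.
# (Hypothetical example; the function is made up for illustration.)
def respond(stimulus: str) -> str:
    """Return the same response for the same stimulus, on any hardware."""
    return stimulus.upper() + "!"

# Whether this runs on an x86 laptop, an ARM phone, or a cloud server,
# the input-output mapping is identical:
print(respond("hello"))  # HELLO!
```

The point of the analogy is that what matters is the mapping itself, not the silicon (or neurons) computing it.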
Given this concept of substrate independence, which is widely accepted, it would be possible to upload a human mind onto a sufficiently powerful computer. Even though our current knowledge doesn't allow us to perform this mind-uploading, since we lack such powerful computers, there is no physical or material constraint preventing us from doing so. We only have to overcome a technical gap to be able to implement minds on computers.
Let's suppose that you are in fact just a simulated mind in a simulated world. The simulation run by the computers of the posthuman civilization is indistinguishable from reality as we know it, so there is no way for you to find out whether you are in a simulation. You could only find out if the posthuman civilization decided to let you know, changing the simulation in such a way that you were primed to discover it. Perhaps a window informing you of the fact would appear in front of you, or perhaps they would "upload" you into their world and cover up the upload by creating the idea of death: before they upload, and therefore remove, a mind from the simulation, they would construct a series of events leading to that person's death, giving a justified reason why that mind is no longer in the simulation.
Now, having outlined some of the theoretical framework and put you into a thought-provoking state of mind, we get to the gist of the simulation argument, which shows that at least one of the following three propositions must be accepted as true:
(1) The chance that a species at our current level of development avoids going extinct before becoming technologically mature is negligibly small
(2) Almost no technologically mature civilization is interested in running computer simulations of minds like our own
(3) You are almost certainly in a simulation
Let us assume that proposition (1) is false. This implies that a significant fraction of civilizations like ours will eventually become technologically mature enough to run computer simulations. If we further suppose that (2) is also false, then a significant portion of these technologically mature civilizations will run computer simulations of minds like ours. Given the enormous computational power such a posthuman civilization would have, it would be able to simulate a huge number of minds. Hence, if both (1) and (2) are false, there will be an extremely large number of simulated minds like ours, far more than the number of non-simulated minds. Simply by looking at the probabilities, you would have to conclude that you are most likely one of the simulated minds rather than one of the exceptional ones running on biological neurons.
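The probability step above can be sketched numerically. The figures below are entirely made up for illustration; the point is only that once simulated minds vastly outnumber biological ones, the odds of being biological become vanishingly small.

```python
# Toy numbers (assumptions, not Bostrom's estimates):
n_biological = 10**10           # assumed number of biological minds
sims_per_civilization = 10**6   # assumed ancestor simulations per mature civilization
minds_per_simulation = 10**10   # assumed minds per simulation

n_simulated = sims_per_civilization * minds_per_simulation

# If you could equally well be any of these minds, the probability
# of being simulated is just the fraction of minds that are simulated:
p_simulated = n_simulated / (n_simulated + n_biological)
print(f"P(you are simulated) = {p_simulated:.8f}")  # 0.99999900
```

With these assumptions simulated minds outnumber biological ones a million to one, so the probability of being biological is about one in a million.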
If we argue that proposition (1) is true, then there must be some extinction event that entirely wipes out civilizations before they become technologically mature. Alternatively, advanced civilizations develop a technology that inevitably turns out to be highly dangerous and destroys them. Either way, not a very bright outlook for the future.
Lastly, if we consider proposition (2) to be true, then almost none of the sufficiently advanced civilizations are interested in running simulations of minds like ours. This is also an interesting outlook, as it would place a constraint on the future evolution of advanced intelligent life. Until now, humanity has always taken steps to develop itself further and has never abandoned technological advancement out of lack of interest; a compelling reason would be needed for this to change in a posthuman civilization.
The most intriguing scenario arises if proposition (3) is true.
What if you really were living in a computer simulation, and all your acquaintances were simply simulated minds controlled by an external computer, which in turn is controlled by some posthuman species? How would knowing this change your behaviour in everyday life? Would you continue as before, or make a drastic change?
Feel free to share your thoughts about this in the comment section below!
In recent years, experimental philosophy has emerged as an exciting new approach to the study of people's philosophical intuitions. Experimental philosophers apply the methods of the social and cognitive sciences to the study of philosophical cognition. Previously, philosophers worked mainly with a priori justifications: statements that can be deduced by pure reasoning and are independent of experience. Consider this proposition: "It rained for 24 hours straight, therefore it must have also rained at 7:44 am." This is something you know a priori, as you can deduce it by logical reasoning alone. Experimental philosophers, on the other hand, argue that one should go out and conduct studies that systematically vary certain factors, and then show how varying those factors influences people's applications of certain concepts.
One of the first findings of experimental philosophy was the effect that moral judgements have on people's understanding of a situation. Joshua Knobe discovered that people's way of understanding the world seems to be suffused with moral considerations.
For example, Knobe found that moral judgements influenced people's intuitions about the concept of intentional action. This is the concept that distinguishes between things people do intentionally, like reading a book, and things people do unintentionally, like, say, biting their own tongue. So the question is: How does this distinction work? How do people normally decide whether an action was intentional or unintentional?
At first, this might seem straightforward: it must have something to do with the mental state of the person performing the action, whether they knew they were doing it and whether they wanted to do it in the first place. Knobe thought there was more to it. He suspected that people's moral judgements were playing a role in their ascriptions of intentionality. So it is not enough just to know what someone wanted to do, or what they knew; you also have to make a judgement about whether what they are doing is morally good or morally bad.
To test his hypothesis, Knobe ran a simple study in which he assigned participants to one of two groups, each of which was presented with a different situation. Participants in the first group were given the following situation ("harm scenario"):
“The vice-president of the company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.”
And the associated question was: Did the chairman of the board harm the environment intentionally?
82% of the first group said that the chairman of the board harmed the environment intentionally. Again, one might think this has something to do with the chairman's mental state: he knew he was going to harm the environment and had a choice, but proceeded with the action anyway.
Let's look at what happened with the second group. They were presented with the exact same situation, with one minor change: instead of harming the environment, the new program was going to help it. So group number two was given the following ("help scenario"):
“The vice-president of the company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, and it will also help the environment.” The chairman of the board answered, “I don’t care at all about helping the environment. I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was helped.”
And now the question was: Did the chairman help the environment intentionally?
Here the participants did not give the same response as group one: 77% of the second group stated that they felt the chairman had helped the environment unintentionally.
Now you can see the discrepancy: even though the chairman's mental state was exactly the same in both scenarios, the participants gave different ascriptions of intentionality to his action. The only thing that changed between the two scenarios was the moral character of the chairman's behaviour, or at least the participants' moral judgement of it. In one case the chairman does something bad, and in the other he does something good. It therefore seems that moral judgements can somehow affect participants' intuitions about whether an action was intentional.
This phenomenon was confirmed over a variety of experiments and was labelled as the “Knobe effect”.
But what are some possible explanations for the Knobe effect?
Edouard Machery proposes a so-called "trade-off hypothesis". He thinks that most people interpret the harm scenario as a case where a bad side effect is traded for some benefit, and we normally think such trades are made intentionally. Because there is a trade-off, an exchange of a desired end for a foreseen bad consequence, and because we think of costs as intentionally incurred, people judge the bad side effect to be brought about intentionally. Cases involving good side effects are therefore not seen as intentional, since there are no associated costs.
Machery tests the trade-off hypothesis with a pair of his own scenarios:
“Joe goes into a convenience store to buy a large drink. The cashier tells him that in doing so he gets a free cup. Joe doesn’t care about the free cup, he just wants their largest drink.”
Here, 55% of people responded that Joe did not obtain the free cup intentionally.
In the analogous harm case, Joe wants to buy the large drink, but the price has gone up by a dollar. He doesn't care about the dollar; he just wants the largest drink. Here, 95% of participants said that he paid the extra dollar intentionally.
Hence, in cases where there is a foreseen cost, people judge that it is brought about intentionally. But when there is an unintended, foreseen benefit, people tend to say that the side effect is not brought about intentionally.
A flaw of this theory is that the cases Machery uses are problematic: it is not clear that they involve side effects at all. In the first case, the free cup Joe gets is not a side effect of buying a drink; getting the free cup is not a separate event. Likewise, in the second case, spending an extra dollar is not a side effect of getting a large drink: Joe spends the extra dollar as a means to the desired result.
Another possible explanation of the Knobe effect is the "biasing explanation" developed by Thomas Nadelhoffer. Nadelhoffer thinks that the perceived blameworthiness of the chairman fuels the asymmetry in the participants' answers. In both cases we naturally form a negative impression of the chairman, since he does not care about something he should care about. For this reason, people do not want to praise the chairman for bringing about a good side effect, but they do want to blame him for bringing about a bad one.
Nadelhoffer claims that the negative affect generated in the chairman cases can explain the Knobe effect. He argues that negative affect can bias one's interpretation of another person's mental states. He studied the judgements of couples that had fought: because of the negative affect generated in these fights, each party thinks that everything the other person does is intentional, even when it is not.
Likewise, because we think that the chairman has done something wrong by harming the environment, which triggers a negative reaction, our judgment about intentionality is biased. Because people see the chairman in a negative light, they are more likely to think that he brings about the harmful side effect intentionally.
There are many other possible explanations of the Knobe effect, and more research has been conducted on the effect that moral judgement has on our decision-making; I will write about those findings in a separate article. For now, I would like the reader to be aware that our decision-making processes are not always as rational as they seem.