Two Perspectives on Catholic Philosophy: Alasdair MacIntyre and Pope John Paul II

Rare Essays, 12 December 2020


Alasdair MacIntyre in his 2009 monograph God, Philosophy, Universities gives what he calls a “selective history” of the Catholic philosophical tradition. This history of philosophy is centered on the relationship between the trinity of factors expressed in the title: God, philosophy, and universities, and culminates in a concluding chapter in which MacIntyre gives some insight regarding how Catholic philosophy ought to proceed in the twenty-first century. MacIntyre notes in particular the importance of Pope John Paul II’s encyclical Fides et ratio insofar as it provides a contemporary Catholic account of the interdependence of philosophy and theology as well as what MacIntyre calls a “redefinition” of the Catholic philosophical tradition for the modern era. Thus, MacIntyre’s book, taking into account both the title and subtitle, could perhaps be described as a history of Catholic philosophy that is meant to set the stage for his critique of the contemporary university and to provide some suggestions for a much-needed response to the rise of secularism from contemporary Catholic philosophers.

To do so, MacIntyre offers an elaborate albeit selective history of what he calls “the Catholic philosophical tradition.” He begins with discussions of its precursors among such late antique and early medieval thinkers as Plotinus, Porphyry, St. Augustine, and St. Boethius, as well as Maimonides and the Muslim interpreters of Aristotle. MacIntyre then traces in the main corpus of the book the history of Catholic philosophy: its genesis in the golden age of scholasticism in the High Middle Ages, its challenge by early modern thinkers beginning with the problems of the philosophy of René Descartes, its subsequent “absence” from philosophical dialogue from circa 1700 to 1850, and its reemergence in various branches in the works of such modern Catholic thinkers as Newman, Anscombe, Gilson, Maritain, St. Edith Stein, Leo XIII, and John Paul II. MacIntyre and Pope John Paul II in Fides et ratio offer several insights on the vocation of Catholic philosophy as well as an historical account of its substantive unity.

What is the Catholic Philosophical Tradition?

For MacIntyre, the Catholic philosophical tradition is a method of philosophical enquiry that is concerned with the interrelation of three core elements: God, philosophy, and universities. MacIntyre comments regarding the central, irreplaceable role of God in Catholic philosophy that: “…Finite beings who possess the power of understanding, if they know that God exists, know that he is the most adequate object of their love, and that the deepest desire of every such being, whether they acknowledge it or not, is to be at one with God.”[1] In a somewhat surprising move, MacIntyre then proceeds to concede three problems intrinsic to theistic philosophy that he claims can never be resolved fully by means of rational enquiry. First, there is the classic problem of evil; that is to say, the apparent contradiction between God’s innate omnipotence and goodness vis-à-vis the abundance of suffering and evil in the world. Second, MacIntyre concedes the problem of free will vs. determinism, which is defined by the apparent contradiction between God’s omniscience and the power of finite beings to act as proximate causes.[2]

Last, MacIntyre notes that theistic philosophers face a fundamental problem insofar as they must concede that if God is indeed omnipotent and eternal, then no earthly language, even philosophical language, could ever adequately describe His nature. Thus, one cannot have quidditative knowledge of God in this world. MacIntyre notes that: “All three problems… are internal to theism, not just problems posed from some external standpoint by some critic dismissive of theism. Those problems would still arise for theists even if no one had ever been an atheist, thereby showing that theism is philosophically problematic.”[3] MacIntyre concedes these points, of course, not to discredit theistic philosophy, for he makes it quite clear that the problems inherent to philosophical systems that deny the existence of God, such as skepticism, are irresolvable in a far more damaging way. Rather, he concedes these problems as inherent to the study of philosophy because he argues throughout the book that philosophical enquiry leads to the truths of revealed theology, and that problems are only to be expected when finite beings attempt to understand aspects of the infinite God by rational enquiry.[4]

The other constituents of the Catholic philosophical tradition for MacIntyre are a certain set of presuppositions about “philosophy” itself as well as an understanding of the university as a place where all the particular sciences are united and interrelated with respect to theology, the study of God qua the “whole”. MacIntyre notes immediately in his introductory chapter on “philosophy” that at first, philosophy and theology seem to be incompatible: theology, which begins with divine revelation and proceeds by means of faith, insists that the God of the three great monotheisms, Christianity, Islam, and Judaism, requires our total unquestioning obedience, whereas philosophy begins with the human mind’s desire to know and proceeds by means of rational activity, which demands that all assertions be subject to questioning and philosophical analysis.[5] MacIntyre outlines three possible responses to this apparent incompatibility of Faith and Reason: an absolute rejection of theism as unreasonable, an absolute rejection of philosophy as impious, or a rejection of the claim of incompatibility itself. The Catholic philosophical tradition adopts the third response and argues for not only the compatibility but the codependence of philosophy and theology, of Faith and Reason, as correlative means of enquiry. Commenting on Catholic philosophers’ approach to this important debate, MacIntyre writes: “…Is this complex set of attitudes possible? It is so only if faith in God, that is, trust in his word, can include faith that, even when one is putting God to the question, one can be praising him by doing so and can expect to be sustained by him in that faith.”[6] Thus, the Catholic philosopher ought not to fear delving into problems that concern both philosophy and theology, such as proofs of God’s existence, the nature of angels, or attempts to understand Creation, but rather should submit God to scrutiny with the ultimate intention of bringing Him praise and glory by doing so.

Pope John Paul II offers profound insight into the question “What is the Catholic philosophical tradition?” in his 1998 encyclical Fides et ratio as well. This encyclical is recommended enthusiastically by MacIntyre as a crucial account of the genesis and unity of Catholic philosophy as well as a “battle plan” for how it should respond to the challenges of secularism in contemporary society.[7] In particular, the pontiff goes into greater detail than MacIntyre in his account of the roots of philosophical enquiry in Christian Scripture, Tradition, and the nature of the human person itself, which eventually led to the formation of a distinctly Catholic philosophical tradition. The pontiff makes it very clear that in his view the unity of Catholic philosophy is utterly dependent on the necessary interrelation of Faith and Reason in his poignant opening line: “Faith and reason are like two wings on which the human spirit rises to the contemplation of truth; and God has placed in the human heart a desire to know the truth - in a word, to know himself - so that, by knowing and loving God, men and women may also come to the fullness of truth about themselves.”[8] John Paul II makes the crucial point that philosophical enquiry has emerged in some form or another in every culture in every age, albeit in many different, often conflicting, systems of enquiry. The pope cites various pre-Christian thinkers of both Western and Eastern antiquity, including Lao-Tzu, Confucius, Aristotle, Homer, Siddhartha Gautama, and Plato, as well as the authors of the Hebrew Scriptures, as thinkers who had various insights to offer on what John Paul calls the “fundamental questions which pervade human life”, such as “Who am I? Where have I come from and where am I going? Why is there evil? What is there after this life?”
Thus, in a somewhat similar manner to MacIntyre, John Paul states, as an introduction to the specific realm of Catholic philosophy, that the great thinkers of all the great religions and civilizations of antiquity composed answers to the same set of ultimate questions. John Paul thereby makes the very important point right from the beginning that he, as the leader of one of the world’s great religions, recognizes that rational activity is part of, not opposed to, human nature. It is not necessarily antithetical to piety or divine revelation.[9]

John Paul’s account of the nature of Catholic philosophy differs from that of MacIntyre in the sense that Pope John Paul II offers a lengthy analysis of how rational enquiry has always been viewed as an intrinsic part of the Catholic tradition, even in its antecedents in the Wisdom literature of the Hebrew Scriptures. For John Paul, the three constituents of Catholic philosophy are knowledge of self, God, and the world, and he argues that in the Catholic philosophical tradition, “…Reason and faith cannot be separated without diminishing the capacity of men and women to know themselves, the world and God in an appropriate way. There is thus no reason for competition of any kind between reason and faith: each contains the other, and each has its own scope for action.”[10] John Paul II comments on the Wisdom literature of the Hebrew Scriptures, such as the Books of Sirach, Proverbs, and Tobit, that it is remarkable that they carry not only the faith tradition of Israel but also “the treasury of cultures and civilizations which have long since vanished.”[11] John Paul notes that even before the birth of Christ, our Jewish forefathers had an intimate understanding of the relationship between knowledge gained in Faith and knowledge learned through rational enquiry.
Scripture lauds the wise man as a seeker of truth, the one truth seen through the two lenses of Faith and Reason; the pontiff quotes from Sirach: “Happy the man who meditates on wisdom and reasons intelligently, who reflects in his heart on her ways and ponders her secrets… He places his children under her protection and lodges under her boughs; by her he is sheltered from the heat and he dwells in the shade of her glory.”[12] The ancient Jews even had some insight into the necessity of Faith and Reason as correlative principles as evidenced in the passage: “The human mind plans the way, but the Lord directs the steps.”[13] Thus, whereas MacIntyre later argues that the Catholic philosophical tradition did not begin until the High Middle Ages, John Paul II sees its roots in Christian antiquity and even in the pre-Christian Wisdom literature of Egypt and Israel.

The Origins of the Catholic Philosophical Tradition

The pontiff sees a continuous recognition of the importance of Reason as a means of better understanding the mysteries of the Faith in the New Testament and early Christian writings as well. The most direct interaction between Greek philosophy and the Christian faith is found in the Acts of the Apostles, Chapter 17, in which St. Paul undertakes a missionary journey to Greece in an attempt to convert the pagans there to Christianity. St. Paul must have been shocked and somewhat appalled by the plethora of pagan idols in the city, John Paul comments, but his approach to evangelization in the “city of philosophers” was quite clever and an important model for the subsequent Catholic philosophical tradition. He said to the Greeks: “Athenians, I see how extremely religious you are in every way, for as I went through this city and looked carefully at the objects of your worship, I found among them an altar with the inscription, ‘To an unknown god.’ What therefore you worship as unknown, this I proclaim to you.”[14] Instead of adopting an entirely antithetical stance to the philosophical tradition that the Greeks treasured so much, then, the wise St. Paul chose to search for common ground between their philosophy and the Christian Faith he came to preach. He did so by lauding them for being “extremely religious in every way,” and used this as a starting point to preach to them about the One God who is Creator and Sustainer of the whole world. St. Paul was able to find this common ground by recognizing a certain transcendental nature of philosophy, albeit pagan Greek philosophy.
John Paul comments on this recognition of its transcendental character: “The Apostle accentuates a truth which the Church has always treasured: in the far reaches of the human heart there is a seed of desire and nostalgia for God.”[15] The pontiff concludes his chapter on Scriptural recognition of the dual role of Faith and Reason by noting that no one, neither philosopher nor ordinary person, can escape asking the ultimate questions with which philosophy is concerned. This universal, innate wonder shared by all mankind is what Aristotle called our “desire to know,” and the Catholic philosophical tradition recognizes that ultimately this desire to know can only be fulfilled by God, the ultimate end of Man.[16]

MacIntyre leaves Scripture out of his account entirely and views all antiquity, even Christian antiquity, as but a precursor to the genesis of the Catholic philosophical tradition. He insists that the Catholic philosophical tradition proper did not begin until the High Middle Ages, when philosophy was recognized as a legitimate mode of enquiry in its own right for the first time in Christian history. One reason he cites for this somewhat unusual claim is that in the Middle Ages, there was a unique recognition among Catholic philosophers that each of the liberal arts had something distinct to offer to an “integrated body of secular knowledge,” and that theology had a unique role as the “Queen of the sciences” in connecting the various particular sciences to the focal point of all knowledge: God.[17] For MacIntyre, the emergence of great Catholic universities in the High Middle Ages as places where the Church did its thinking is the most important defining feature of a truly Catholic philosophical tradition. He states: “It was because thirteenth-century European universities…became scenes of intellectual conflict, places where the fundamental issues that divided and defined the age were articulated, that their history provides the setting for the emergence of the Catholic philosophical tradition.”[18] Thus, under this university-centered definition of the Catholic philosophical tradition, great thinkers of Christian antiquity and the early medieval period such as St. Augustine, St. Boethius, and St. Anselm are not considered to be genuine partakers in the Catholic philosophical tradition.

MacIntyre admits that the theology of St. Augustine dominated throughout the Middle Ages, as did his provisions for the interpretation of Scripture, but he argues that the philosophy of St. Augustine was distinct from the Catholic philosophical tradition insofar as it viewed philosophical enquiry as a means to the end of understanding tenets of the Christian faith rather than as a legitimate means of enquiry in its own right. In other words, philosophy is the “handmaiden” of theology rather than its complement. MacIntyre argues that Augustinian thinkers were at times even opposed to the integration of philosophy and theology that became the crowning achievement of the scholastic project. He states regarding this antipathy: “The question of the relationship of philosophy to theology became the question of the relationship of Aristotle’s philosophy - and science - to theology. Many Augustinian theologians found… reason to reject…the Aristotelian claim that philosophical enquiry has its own standards and methods, independent of those of theology.”[19] On the other extreme, MacIntyre makes the important point that the Averroistic Aristotelians at the Sorbonne proposed a radical theory called the “doctrine of two truths.” This muddled attempt to explain the relationship of philosophical truth to theological truth taught that the truths of Aristotelian philosophy and Catholic Christianity are, as they appear to be in light of the aforementioned problems intrinsic to theistic philosophy, contradictory. However, one could still be a devout Catholic and a follower of Aristotelian philosophy without contradiction by conceding two separate sets of truths: those of philosophy and those of theology.
For instance, as an Aristotelian one accepts the eternity of the world as philosophical knowledge, but as a Catholic, one believes in the Creation doctrine as theological truth.[20] This blatant violation of the principle of contradiction did not, of course, solve the problem of the relationship between philosophical and theological truth. It did, however, illustrate the great need for a comprehensive philosophical system compatible with Catholicism that could account for philosophy as a legitimate means of enquiry in its own right, and this project was undertaken by St. Thomas Aquinas.

The Response of St. Thomas Aquinas

Pope John Paul II does not agree with MacIntyre, of course, that the Catholic philosophical tradition does not emerge until the thirteenth century. He sees it rather as one continuous development, through the centuries, of the understanding of the relationship between Faith and Reason, and sees Catholic philosophy present already even in the times of the apostles. He does, however, agree with MacIntyre regarding the pivotal role that St. Thomas Aquinas played in the Catholic philosophical tradition insofar as he proposed a philosophical system that accounted for philosophy as a means of enquiry independent of theology. The pontiff devotes an entire section to the Angelic Doctor in Fides et ratio entitled “The enduring originality of the thought of St. Thomas Aquinas,” and comments: “In an age when Christian thinkers were rediscovering the treasures of ancient philosophy…Thomas had the great merit of giving pride of place to the harmony which exists between faith and reason. Both the light of reason and the light of faith come from God, he argued; hence there can be no contradiction between them.”[21] St. Thomas rejected those who distrusted philosophy as intrinsically opposed to the Christian faith simply because it came from primarily pagan sources.

Rather, St. Thomas argued that all truths come from God, and believed that the truths of Aristotelian philosophy, if properly expounded in the light of Christian Revelation, could be of great help in providing rational justification for some of the truths of Christianity. Pope John Paul II credits St. Thomas for coming to the recognition that: “Faith therefore has no fear of reason, but seeks it out and has trust in it. Just as grace builds on nature and brings it to fulfillment, so faith builds upon and perfects reason. Illumined by faith, reason…finds the strength required to rise to the knowledge of the Triune God.”[22] Last, John Paul credits St. Thomas with being an “apostle of truth.” One of St. Thomas’ greatest contributions to Catholic philosophy, John Paul II notes, was his recognition that regardless of its source, every truth is a gift of the wisdom of the Holy Spirit. Thus, St. Thomas, unlike the Averroists, was able to provide a justification for philosophical enquiry that respected the magisterium of the Church and would become a model for a Catholic understanding of philosophy and theology as two paths to the same truth of the One God.

The Absence of Catholic Philosophy in the Early Modern Era

Both Pope John Paul II and MacIntyre acknowledge the regrettable absence of Catholic philosophy in the early modern era following the golden age of scholasticism in the High Middle Ages. Indeed, MacIntyre notes that many of the problems faced by contemporary Catholic philosophers are the result of one hundred and fifty years of relative silence (c. 1700-1850) from Catholic philosophers; he states: “To have to reckon with all of these [secular challenges] now is part of the price that Catholic thought has had to pay for its absence from the philosophical scene during those periods in which these secularizing modes of thought were first developed.”[23] MacIntyre, then, attributes a significant portion of the “blame” for some of the grievous errors made by early modern philosophers to Catholic philosophers who never entered the realm of the debate in the first place. He states regarding this period of absence: “There was in consequence no dialogue between Catholic philosophers and the seminal thinkers in the development of modern philosophy. Where philosophy flourished, Catholic faith was absent. Where the Catholic faith was sustained, philosophy failed to flourish.”[24] The Catholic philosophical tradition was still passed on in seminaries, Catholic universities, and Dominican and Franciscan houses of study, but it was an “arid restatement” of old ideas bereft of philosophical introspection.[25]

MacIntyre notes the bitter irony that René Descartes, a devout Catholic who believed that his philosophical system could prove the existence of God beyond a shadow of a doubt, played a pivotal role in the divorce of Faith and Reason following the decline of scholasticism in the Late Middle Ages. Descartes’ philosophy based on doubt endured, but his proofs of God were recognized as intrinsically flawed almost immediately after his own lifetime, and thus his successors, such as Malebranche, Leibniz, and Spinoza, were left with the monumental task of solving the mind-body problem perpetuated by Cartesian dualism. It was only a matter of time, MacIntyre notes, before Cartesianism led to skepticism in the philosophy of David Hume, who begins with Cartesian doubt and concludes that there can be neither objective knowledge nor ontological certainty at all.[26] This skepticism, of course, is totally contrary to the harmonious view of Faith and Reason promoted by St. Thomas and the other scholastic thinkers. MacIntyre notes on this growing disconnect between Faith and Reason in the modern era: “As modern philosophy moved beyond its Cartesian beginnings, its conception of the nature and limits of human knowledge and of the universe, insofar as it is knowable, leaves no place for the God of theism.”[27] Thus, MacIntyre argues that the Catholic philosophical tradition, having emerged rather recently in the thirteenth century, had already descended into a period of isolation by the eighteenth century.

Pope John Paul II also discusses the absence of Catholic philosophy in the early modern era in Fides et ratio, but his perspective differs from that of MacIntyre insofar as it focuses on the abandonment of scholastic principles rather than on key figures in philosophical history. John Paul II criticizes Cartesian and post-Cartesian contributions to the separation of Faith from Reason without attributing blame to any specific philosopher when he states: “As a result of the exaggerated rationalism of certain thinkers, positions grew more radical and there emerged eventually a philosophy which was separate from and absolutely independent of the contents of faith.”[28] John Paul argues that not only did early modern philosophy develop a mistrust of Faith, but also “an even deeper mistrust” of Reason that led to agnosticism, skepticism, and, ultimately, nihilism. The pontiff draws a sharp distinction between scholasticism and early modern philosophy when he states: “In short, what for patristic and medieval thought was in both theory and practice a profound unity, producing knowledge capable of reaching the highest forms of speculation, was destroyed by systems which espoused the cause of rational knowledge sundered from faith and meant to take the place of faith.”[29] John Paul cites the poisonous effects of this skepticism in the particular sciences, such as the atheistic positivism that has come to dominate the physical sciences. John Paul laments in particular the nihilism that came to fruition in the nineteenth and twentieth centuries; he considers nihilism to be even further opposed to the Catholic philosophical tradition than skepticism because it reduces philosophy to an end in itself.
The search for truth, then, becomes a hopeless search devoid of any objective meaning or truth.[30] Last, the pontiff laments how the divorce of Reason and Faith has led each to spiral into extremism: “Deprived of what Revelation offers, reason has taken side-tracks which expose it to the danger of losing sight of its final goal. Deprived of reason, faith has stressed feeling and experience, and so run the risk of no longer being a universal proposition.”[31] John Paul goes so far as to blame the excesses of unconstrained philosophy for the great human rights atrocities of the twentieth century, and concludes this section with an appeal for contemporary Catholic philosophers to recover the “profound unity” of Faith and Reason.[32]

Contemporary Catholic Philosophy: Challenges for Today and Tomorrow

The final two chapters of MacIntyre’s God, Philosophy, Universities are devoted to an analysis of the problems facing contemporary Catholic philosophy today as well as some suggestions for how Catholic philosophers ought to respond to them. One major problem is the degeneration of the university, even the Catholic university, into a loosely defined collection of particular sciences that no longer have any established unifying principle. MacIntyre laments how the university, once the single defining characteristic of the Catholic philosophical tradition, has “now largely become a prologue to specialization and professionalization.”[33] There is no one whose job it is to account for the unity or even the interaction of the various disciplines, and thus there is always a temptation for particular sciences to view themselves as sufficient sources of knowledge without respect to any other discipline. Another serious problem is the relegation of philosophy to a particular discipline and the eradication of theology altogether from the pedagogies of many contemporary universities.[34] Thus, the contemporary university, much to the detriment of society, is no longer a place where students can ponder what it is to be human. MacIntyre agrees with Pope John Paul II that there is a great need for a response from Catholic philosophy to the challenge of secularism, and recommends Fides et ratio as the defining text of the mission of the Catholic philosopher in contemporary society.[35] He notes that in one sense, contemporary Catholic philosophy still allows for a wide range of philosophical disagreement, insofar as it includes such diverse philosophies as the phenomenology of St. Edith Stein, the analytic philosophy of G. E. M. Anscombe, and the neo-Thomism of Jacques Maritain and Etienne Gilson.
However, these Catholic philosophers all agree on basic principles as presuppositions of philosophical enquiry, chief among them being the interdependence of Faith and Reason as correlative principles.[36]

Pope John Paul II sets forth an action plan for contemporary Catholic philosophy to reestablish an understanding of the intrinsic harmony of Faith and Reason in the final chapters of Fides et ratio. He concludes the encyclical by stating: “Philosophical enquiry can help greatly to clarify the relationship between truth and life… and above all between transcendent truth and humanly comprehensible language. This involves reciprocity between the theological disciplines and…philosophy. Such reciprocity can prove genuinely fruitful for the communication and deeper understanding of the faith.”[37] John Paul argues that Catholic philosophers also must accept magisterial authority and let it guide them in their philosophical enquiry without fear that it will “restrict” their search for truth. Such an objection is not reasonable for a Catholic philosopher, the pontiff argues, because the Catholic faith has taught since the time of St. Thomas Aquinas that philosophy and theology are both autonomous disciplines with their own legitimate means of enquiry. The magisterium never proposes answers to philosophical questions of its own, but rather serves as a beacon of light to philosophers to ensure that their search remains one of truth rather than utility.[38] John Paul also recommends that Catholic philosophers recognize the unique significance for philosophy of the Second Vatican Council’s emphasis on the dignity and freedom of the human person.
Catholic philosophers have an obligation to remind the world that the desire for truth which every person has is a gift from God, not a tantalizing impetus for a hopeless journey.[39] Last, John Paul asserts: “The Church has no philosophy of her own nor does she canonize any one particular philosophy in preference to others.”[40] Thus, it is perfectly acceptable that there exists a multiplicity of perspectives even within the Catholic philosophical tradition, because they all agree on the fundamental presuppositions of Catholic philosophy. Catholic philosophers ought to proceed in discourse with their secular counterparts, then, so that they may convince skeptics of the fundamental principles espoused by the Catholic philosophical tradition. Indeed, John Paul states, they have an obligation to do so, for it is not the mission of the Church to correct errant philosophy, but that of her own philosophers.[41]

Conclusion: The Unity and Vocation of the Catholic Philosophical Tradition

Alasdair MacIntyre’s monograph God, Philosophy, Universities and Pope John Paul II’s encyclical Fides et ratio offer important contemporary Catholic perspectives on the unity and vocation of the Catholic philosophical tradition. For MacIntyre, the Catholic philosophical tradition is defined by the relation of God and philosophy as studied in universities, places not only for secular education but where students learn what it is to be a human person from the perspectives of individual disciplines with respect to theology, the study of the divine. MacIntyre places such emphasis on the distinctive role of the university in the Catholic philosophical tradition that he argues that Catholic philosophy did not begin until the Golden Age of scholasticism in the thirteenth century. John Paul II views the Catholic philosophical tradition as one continuous strand of recognition of Faith and Reason as correlative principles that began in Christian antiquity and even has discernible roots in Sacred Scripture. Both MacIntyre and John Paul concur that St. Thomas Aquinas played a pivotal role in the history of Catholic philosophy by developing a holistic account of philosophy and theology as unique albeit interdependent disciplines with their own valid methods of enquiry. For St. Thomas and the scholastics, all truths come from God, and therefore Catholic philosophy recognizes that philosophical truth and theological truth are two paths to the same Truth of God’s Revelation. MacIntyre and John Paul join together in lamenting the subsequent divorce of Faith and Reason in modern philosophy; MacIntyre criticizes the skepticism of early modern philosophy as well as the absence of a Catholic response to it, whereas John Paul focuses his critique on the nihilistic philosophy of the nineteenth and twentieth centuries. Both men agree that Catholic philosophers have an obligation to formulate a long-overdue response to the challenge of secularism. MacIntyre and Pope John Paul II attest that the defining characteristic of the Catholic philosophical tradition is its insistence on Faith and Reason as correlative principles, and agree that contemporary society must be reminded by Catholic philosophers of this fundamental principle of the search for truth.

Resources

MacIntyre, Alasdair. God, Philosophy, Universities: A Selective History of the Catholic Philosophical Tradition. Lanham, MD: Rowman & Littlefield Publishers, Inc., 2009.

Pope John Paul II. Fides et ratio. Encyclical letter on the relationship between Faith and Reason. Vatican Web site. 9 December 2010. http://www.vatican.va/holy_father/john_paul_ii/encyclicals/documents/hf_jp-ii_enc_15101998_fides-et-ratio_en.html.


[1] Alasdair MacIntyre, God, Philosophy, Universities: A Selective History of the Catholic Philosophical Tradition (Lanham, MD: Rowman & Littlefield Publishers, Inc., 2009), 6.

[2] Ibid.

[3] Ibid., 7.

[4] Alasdair MacIntyre, God, Philosophy, Universities, 14.

[5] Ibid., 13-14.

[6] Alasdair MacIntyre, God, Philosophy, Universities, 14.

[7] Ibid., 176-8.

[8] Pope John Paul II, Fides et ratio, Encyclical letter on the relationship between Faith and Reason, Vatican Web site, 9 December 2010, http://www.vatican.va/holy_father/john_paul_ii/encyclicals/documents/hf_jp-ii_enc_15101998_fides-et-ratio_en.html, Introduction.

[9] Pope John Paul II, Fides et ratio, Sec. 1.

[10] Ibid., Sec. 16-17.

[11] Pope John Paul II, Fides et ratio, Sec. 16.

[12] Sir. 14:20-27.

[13] Proverbs 16:9.

[14] Acts 17:22-23.

[15] Pope John Paul II, Fides et ratio, Sec. 24.

[16] Ibid., Sec. 27.

[17] MacIntyre, God, Philosophy, Universities, 62.

[18] Ibid., 65.

[19] MacIntyre, God, Philosophy, Universities, 66-67.

[20] Ibid., 67-68.

[21] Pope John Paul II, Fides et ratio, Sec. 43.

[22] Ibid.

[23] MacIntyre, God, Philosophy, Universities, 66-67.

[24] Ibid., 133.

[25] Ibid.

[26] MacIntyre, God, Philosophy, Universities, 132-33.

[27] Ibid., 132.

[28] Pope John Paul II, Fides et ratio, Sec. 45. It is also noteworthy that Pope John Paul II, contra MacIntyre, lumps the patristic and medieval traditions together in their recognition of the importance of Faith and Reason, albeit in variant proportions.

[29] Ibid.

[30] Pope John Paul II, Fides et ratio, Sec. 46.

[31] Ibid., Sec. 48.

[32] Ibid.

[33] MacIntyre, God, Philosophy, Universities, 173.

[34] Ibid., 176-77.

[35] Ibid., 165.

[36] MacIntyre, God, Philosophy, Universities, 169.

[37] Pope John Paul II, Fides et ratio, Sec. 99.

[38] Ibid., Sec. 49.

[39] Ibid., Sec. 60.

[40] Pope John Paul II, Fides et ratio, Sec. 49.

[41] Ibid., Sec. 50 & 51.

Kant’s Prolegomena as an Argument Against Hume’s Skepticism https://rareessays.com/philosophy/kants-prolegomena-as-an-argument-against-humes-skepticism/ https://rareessays.com/philosophy/kants-prolegomena-as-an-argument-against-humes-skepticism/#respond Wed, 09 Dec 2020 07:43:26 +0000 https://rareessays.com/?p=88 Immanuel Kant is widely regarded as one of the most influential philosophers of the modern era. In his work, Prolegomena to Any Future Metaphysic (full text), which is based upon and contains selections from his work, A Critique of Pure Reason, Kant argues that he has discovered a means by which one can escape the […]

The post Kant’s Prolegomena as an Argument Against Hume’s Skepticism appeared first on Rare Essays.

Immanuel Kant is widely regarded as one of the most influential philosophers of the modern era. In his Prolegomena to Any Future Metaphysics (full text), which is based upon and contains selections from the Critique of Pure Reason, Kant argues that he has discovered a means of escaping the skepticism then prevalent in Western metaphysical thinking, thereby allowing one to have certain knowledge of the physical world. The writings of David Hume were largely responsible for that prevalent skepticism; Kant’s writings on the subject can therefore be seen, in large part if not in totality, as an argument against Hume, and in particular against his An Enquiry Concerning Human Understanding (full text). In this essay, we will explore the ideas of both Hume and Kant in an attempt to explain how Kant avoids the skepticism depicted by Hume. Before we can do this, however, we must understand the ideas of Hume and how they lead us into skepticism.

Hume’s argument for skepticism in Enquiry

In the Enquiry, Hume begins with the statement that human beings have no innate knowledge and that all ideas are born from sensory impressions. Once we have these ideas in our minds, we are, in some cases, free to explore them without constant access to sense perception, but only after we have first had an experience of whatever idea it is that we are exploring. After establishing this, he goes on to posit two categories into which our thoughts can be divided. The first of these categories he calls the “Relations of Ideas.” This category includes those few things of which Hume believes we are able to have certain knowledge: primarily mathematics, geometry, and logic. Hume believes that these things, though they must first be experienced through the senses, are thereafter available to our minds, in the same way that I, having seen a triangle, do not need to have a triangle in front of me in order to know what constitutes a triangle. The second category Hume posits is the Matters of Fact. This category contains essentially everything that is not mathematical, geometric, or logical. The things which make up this category are totally dependent on the senses, in the same way that, even after I have first experienced the color blue, I am unable to give a definition of blue other than to point to a sensory perception and say, “That object is blue.” Next, he deals with the ways in which human beings organize these matters of fact. Hume states that we link matters of fact together in our thought in three primary ways. The first of these is Resemblance: we group ideas together according to the resemblance they bear toward one another, in the way that ice cubes resemble snow in being cold and frozen. Secondly, we group ideas according to Contiguity: we group things in time and space according to when and where we sense them.
Finally, we place ideas in causal relations by saying that event “A” causes event “B.” According to Hume, not only are these matters of fact completely dependent upon our sensory perceptions, but they are also wholly unable to be judged as existent or true in any way. This is, Hume states, because we have no way of sensing any force of causality. Therefore, we have no way of knowing whether the linkages we draw between matters of fact exist, and we likewise have no way of knowing whether there is any reality behind our sensations beyond pure sense perception. Because we have no understanding of causality, and indeed no certainty in the existence of the world itself, we are also completely unable to place our certain trust in the sciences. Further, because we have no sensation of causality, we have no way of knowing that the future will resemble the past, which means we can act only upon probabilities, rather than upon the belief that something will happen because it has happened in the same way before. Having no knowledge of causality, or of any link between future and past, we have no way of knowing what, if anything, causes our impressions of the world. Lastly, according to Hume, because all human ideas must be grounded in sense data, anything spoken of by a human being which cannot be experienced as an impression is nothing more than a mixing of formerly sensed impressions; it is worthless and must be thrown out. Having made these statements, Hume leaves us with an unbridgeable epistemological gap between the self and the outside world, across which no true and certain knowledge of the world may be gleaned. Kant, however, believes he has found an answer to Hume’s skepticism, and in the Prolegomena he attempts to lay out the bridge across this epistemological gap. This is an undertaking in which he is only partially successful.

Prolegomena to Any Future Metaphysics

 Analytic and Synthetic Judgements

First in the Prolegomena, Kant redefines Hume’s categories of ideas. The first category he names Analytic Judgments. These analytic judgments, Kant believes, are a priori in nature. By this he means that they exist independent of the physical world, are intuitive, and are not derived from the physical world. They can be known with certainty, and they generally explain through processes of definition and logic. Hume’s example of a “bachelor” being defined as an unmarried man would fit into this category, which corresponds roughly with Hume’s Relations of Ideas. The second category into which Kant divides human ideas corresponds roughly with Hume’s Matters of Fact. Kant names this category Synthetic Judgment, or Synthetic A Posteriori Judgment. Synthetic judgments are those judgments made through the use of the senses about the world around us. These judgments, Kant states, cannot be known with certainty and are dependent on physical reality. An example of this category would be a statement along the lines of “This book bag is blue.” The truth of this statement depends on whether the book bag truly is blue, and it relies upon sensation for its confirmation. Though both of these categories are useful (analytic judgments in that they can be certain, synthetic judgments in that they are applicable to the world), these two categories alone would not be sufficient to allow Kant to escape the skepticism in which Hume found himself. Therefore, Kant recognizes the need for a third category of ideas able to bridge the gap between the self and the external world with the certainty of judgments intact.

Synthetic a priori knowledge

Kant labels this third category Synthetic A Priori Knowledge, and states that the things falling under it are both intuitive (a priori) and constant, but must also be applied to the world to form synthetic judgments which vary with changing sensation. Included in this category are the Forms of Sensibility and the Categories of the Understanding. These two concepts are what Kant believes have enabled him to escape the skepticism of Hume, so to fully understand Kant’s argument we must explore their nature. In setting out his thoughts, Kant studied the world and came across a few simple truths about the way in which every human being experiences it. First, every human being experiences the world in both space and time. That is, any object experienced by a human being is experienced as existing somewhere in the space around him, and also as existing in time, be it the present, as something he is currently examining; the past, as a memory of what once was; or the future, as an idea of something that will come to pass. Also, in studying human sense perceptions, Kant comes to believe that there are only twelve types of judgments that can be made by a human being about the world: every judgment made by a human being about anything that exists to him or her physically will fit invariably into one of these twelve categories. These discoveries form the basis for Kant’s theories of the Forms of Sensibility, which are space and time, and of the Categories of the Understanding, which explain the way in which human beings process their experiences of the world. These forms and categories must have sense data with which to interact. Because of this, Kant posits the existence of the Noumenal World.

The Fragility of Socialist Utopias: Some Problems of Central Planning and Rationalist Design https://rareessays.com/philosophy/political-philosophy/the-contingency-of-socialist-utopias-some-problems-of-central-planning-and-rationalist-design/ https://rareessays.com/philosophy/political-philosophy/the-contingency-of-socialist-utopias-some-problems-of-central-planning-and-rationalist-design/#respond Mon, 07 Dec 2020 07:41:47 +0000 https://rareessays.com/?p=69 From time to time an author or thinker will create a work, often in the Utopian genre, which lays out a detailed design of an ideal society. Fourier’s phalanestères are one example: they are described as the structure of a social unit, all the way down to the number of inhabitants and to the shape […]

The post The Fragility of Socialist Utopias: Some Problems of Central Planning and Rationalist Design appeared first on Rare Essays.

From time to time an author or thinker will create a work, often in the Utopian genre, which lays out a detailed design of an ideal society. Fourier’s phalanstères are one example: they describe the structure of a social unit, all the way down to the number of inhabitants and the shape of the actual buildings that house them.

The general problem with these plans is that they lack generality over time and space. They fail the test of universality. The following will be my random walk through some of the problems with rationalist institutional construction and the subsequent problems of central planning.

Social planning is vulnerable to real-world changes

Most people would recognize that a particular building design or architecture can become obsolete. Many would laugh at an actual plan to construct Campanella’s City of the Sun or Fourier’s phalanxes in the present day. Their reasoning would be obvious: those things were designed in an entirely different time, under different circumstances. This is not to say that those authors, and many like them, put forth their ideas as timeless and never requiring change (some have occasionally had the delusion of technological growth simply stopping at one point), but a large degree of universality is frequently attached to more abstract kinds of social planning.

Some examples of central design are much more concrete than others, but central planning that involves physical engineering is not the only kind that encounters severe problems; institutional design does as well. For a long time, it was thought to be sound business strategy always to have a middleman for many kinds of transactions. With changes in technology, the middleman has frequently been cut out, and with good reason: he is no longer needed. Yet what would happen if, in my ideal construction of a society, there were always a middleman between wholesale and retail? What if I claimed that this middleman led to the greatest well-being of my society’s members? Economics would most certainly stand against me.

Institutions are not universal

Despite that, all kinds of social manifestos, utopias, and even national constitutions establish permanent institutions as features of the society. It can be a ruling council of Thirteen, a Guardian class, or a president, a 480-member congress, and an 11-member judiciary. They make the mistake of integrating the information available at the current time into a set of concrete institutions that are then held to be universal, but are not in fact universal. This is symptomatic of a general problem with leftist thought: it is often too concrete-bound in its approach to society. Contingent concretes, such as the current distribution of income and power in society, are used as premises from which “universal principles” are derived, like: there is always a class of the rich and a class of the poor, and the former always oppress the latter. The problem is that those supposedly universal principles apply only in narrowly contingent cases, which makes them not universal (even setting aside whether the derivation of those principles is valid). They ignore changing circumstances and technology (never mind all the other fallacies, like the total fabrication of principles of justice, ignorance of the actual factors that cause poverty, etc.).

The general empirical principle underlying this is that no mind or group of minds can ever gather, process, and coordinate all of the information necessary to perfectly govern complex human conduct. Even setting aside any normative principles of individual autonomy, the idea of governance, especially economic governance, by the few over the many is riddled with problems, in theory and as demonstrated in practice. Every economic agent has a delicately unique and complex set of circumstances and preferences, and only he has direct access to his own set. Supposing that someone trying to make economic decisions for this person were acting totally altruistically (another very generous premise, given what practice has demonstrated), he would require a means of translating that agent’s changing circumstances and preferences (closely related to subjective experiences of pain, pleasure, etc.) into usable information, which he must then process to prescribe a course of action, which must in turn be executed correctly. Multiply this process over thousands or millions of people, and there is quite a huge problem. It is wishful thinking already that one person can make decisions for another effectively (people have enough problems making decisions for themselves), so it must be even more wishful to think that some people can do it for many others, even suspending for a moment the selfish interests of those decision makers.

Only the free market (which is run by, precisely, nobody) is capable of coordinating the largely diffuse information spread among economic agents into forming an optimum output. This is not just an optimum regarding maximal manufacturing output for the lowest possible cost, a common straw man constructed against the free market to paint it as a cutthroat institution of total efficiency. That notion is just a Platonic hangover – as if goods are produced for the goods’ sake – which ignores why those goods are created in the first place: to enhance an individual’s well-being. The free market forms an optimum output with respect to the amount of resources available, and, more importantly, to the totality of the individual preferences of all market participants.

Weaknesses in central planning

Very closely related to the information problem of central planning is pricing or, more broadly, valuation of goods, services, or virtually everything whose control and consumption can be transferred from one individual to another. Valuation by demand is self-defining: what someone is willing to pay for something is what it’s worth. No Platonism necessary, no intrinsicism, just pure empirical fact. In a centrally planned system that prohibits free association, value must be decided; otherwise, there is no meaningful way of allocating produced goods among the members of society. Again, suspending the selfish interests of the appraisers, this leads to bizarre information problems and to the humorous possibility of the “value” contributed by producers exceeding the amount of goods and services available in an economy, resulting in people deserving more than is possible to provide.

Another problem with central planning is, in brief, the actual presence of human beings. Markets cannot be avoided, because the free market is all about incentives, and the responsiveness to incentives embedded in human nature asserts itself no matter what system prevails. Black markets develop in response to government prohibitions; defying the law becomes a business in which risks are taken but large profits are reaped. In totalitarian systems (especially those with distributive wealth patterns, as in communism), individuals use their positions as, or connections with, bureaucrats and politicians in order to gain a bigger share of the pie. Even in our purportedly “free” economy, in which the government intervenes to harness the “dangers” of the free market, interest groups spend billions of dollars yearly lobbying federal, state, and local governments, getting laws passed in their favor to the detriment of others, and electing politicians and bureaucrats who use the force of the law to increase business profits.

(Incidentally, the few errant cases in which people’s preferences are static and minimized do not undermine this universality of the human condition, for the reason that incentives can be structured to shun accumulation of material possessions or other conventional measures of well-being. Some tribes have a social value of personal prestige over wealth, and thus individual members will often spend all of their wealth on extravagant feasts for the tribe or on constructing large memorial edifices.)

Up to this point I’ve freely switched back and forth between central institutional design and central planning. Though there is a distinction between the two, they ultimately suffer from the same problems. First, even in a static environment, central design and planning simply lack the coordination of information necessary to achieve anything close to efficiency. Gathering the information is either next to impossible or is so costly to achieve that it defeats the purpose of establishing any institutions in the first place. Then, not only must the institution measure up to the circumstances of the time, it must be resilient and adaptable to the rapidly changing and non-ergodic world. The environment changes. Technology changes. People change. If the institution itself entails an active form of intervention (such as value arbitration, as in Marxism), the central planners constantly face the problem of incomplete and changing information.

Any societal plans that establish hard-and-fast institutions and rely on constant governance are prone to disaster, especially once abuse of power is considered. Up to this point I have neglected to address that fact, which is the most important of all: much of the preceding discussion generously takes for granted that those involved in central planning have no interest other than doing their job as best they can. In other words, I have ignored an even more fundamental flaw in central planning. Yet even with that generous assumption, it still had problems, didn’t it?

Video Games, Violence, and Society: a Defense https://rareessays.com/philosophy/video-games-violence-and-society/ https://rareessays.com/philosophy/video-games-violence-and-society/#respond Mon, 07 Dec 2020 07:40:52 +0000 https://rareessays.com/?p=67 I love video games. Lots of us do. Yet our love is not always shared, and many have asked about the potential social impacts of games: do they cause violence? Do they cause deviant, disruptive, or otherwise antisocial behavior? Since the tragic Columbine shootings, whose perpetrators were players of the revolutionary first-person-shooter Doom, video games […]

The post Video Games, Violence, and Society: a Defense appeared first on Rare Essays.

I love video games. Lots of us do. Yet our love is not always shared, and many have asked about the potential social impacts of games: do they cause violence? Do they cause deviant, disruptive, or otherwise antisocial behavior? Since the tragic Columbine shootings, whose perpetrators were players of the revolutionary first-person-shooter Doom, video games have been called into question for their supposed negative effects on society. While this began, naturally, with investigation of the presence of violence in games, the backlash against video games has also entered the realm of sexual content, profanity, counter-productivity, and other social taboos.

It is impossible to perfectly stratify discussion about video games in society into neat categories. With that in mind, one must contemplate the following facts when considering the place of video games in society, as I will in this paper. Video games are the product of the brilliance of technology, and conceptually they are the single medium that will come ever closer to the fullest representation of reality and pseudo-realities. However, there exists a conservative element of society that imagines that video games are empty, brain-draining activities upon which children and adults spend wasteful hours, leading them to violent or lewd behavior and to the breakdown of society.

Part of this element comes from inexperience with and ignorance of video games; another part comes from an arbitrary view of society and morality; and another part, fundamentally, comes from a subconscious hatred of the good for being good. Video games are not just mindless, substance-free, sugary candies for the brain. They, like all other media, have the ability to be beautiful, emotional, intelligent, poetic, reflective, or any other adjective one can find to describe a piece of art, yet they can do it in an exceptionally new way. They put the consumer in the driver’s seat, saying, “make this experience your own,” whether it is in custom character creation, open-ended problem-solving, or pervasive ethical quandaries. A full understanding of the educational and entertaining possibilities to be offered by the medium of video games, as well as the nature of its enemies, can lead to the full realization of its potential benefits.

Video games as simulations of an alternate reality

A false assumption to make about video games, especially in the modern day, is that they are ergodic: predictable, repetitive, or otherwise banal. Quite the contrary: the nature of logic, and modern technology’s ability to manifest that logic on the computer screen, has demonstrated repeatedly that video games (and their scientific counterparts, computer models) lead to new kinds of situations, interactions, and understandings of things never observed before. Even in terms of game design, games have been played and optimized beyond ways their developers would ever have expected. Though older game design may have been more direct and linear in construction, with fewer possibilities, newer games have capitalized on the presence of new technologies, as well as the experience learned from past greats, to create dynamic gaming experiences.

The more characteristics and variables are programmed into individual objects and entities, the more the possibilities grow, exponentially. The gradual evolution of video games is not toward “realism” per se, but toward immersion: natural consistency and dynamics. It is nonsensical to ask for realism in a game about Dungeons & Dragons, a universe jam-packed with magic, but that does not mean that anything goes: the magic must act as believably as it can, as though the game were saying, “if magic actually existed, this is how it would behave.”
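To make the exponential-growth claim concrete, here is a back-of-the-envelope sketch. All numbers are hypothetical, chosen only to illustrate the arithmetic: if each simulated object tracks a handful of variables, each with a few possible values, the count of distinct game states is a power of those figures, so adding objects multiplies the exponent.

```python
# Illustrative only: combinatorial growth of a game's possible states.
# The parameters (objects, variables, states_per_variable) are invented
# for this sketch and do not describe any real game engine.

def state_space(objects: int, variables: int, states_per_variable: int) -> int:
    """Total distinct configurations of all objects combined."""
    return states_per_variable ** (variables * objects)

# Doubling the number of simulated objects squares the state space:
small = state_space(objects=2, variables=3, states_per_variable=4)  # 4**6 = 4096
large = state_space(objects=4, variables=3, states_per_variable=4)  # 4**12 = 16777216
print(small, large, large == small ** 2)
```

Even these toy figures show why richer object models yield experiences that no designer could enumerate in advance.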

It is thus a misconception that video games provide the same singular, preprogrammed experience that movies or television provide. Though individuals of course have subjective responses to the same content in movies and television, games provide subjectivity of two orders: the first of the player, and the second of the content itself as experienced. Beyond the simple spontaneity implemented into games (randomized behavior of enemies, item appearances, etc.), the actual subjective presence of the player, whether in his ability to operate his character or in his choices of action, affects the outcome of what actually occurs on-screen. The more complex a game becomes, the more this holds true.

This facet of video games, their non-ergodicity, is by far their most important characteristic. It is, as we have seen, what makes them unlike any medium that came before. Generally speaking, the power of computers has given us the ability to imagine different objects of totally diverse natures, program them into a system of laws of interaction, and then sit back and watch the show. Biologists can now see things that are logically real but no longer have to be directly observed or deduced by hand; one researcher speaks of DNA shuffling for forecasting genetic behaviors: “we used thermodynamics and reaction engineering to evaluate and model this complex reaction network so we can now predict where the DNA from different parent genes will recombine.”[1] Economists can imagine economic actors with certain preferences, assume they are utility-maximizing, plug in the amount of resources available, and learn what kinds of things people will produce. In the same regard, the video gamer can ask, “let’s say we have a mountain lion fighting a huge wasp; who will win, and will the fight be awesome or lame?” The process is about imagining independent things, making assumptions about their characteristics, and then throwing them into a figurative box, shaking it, and pouring it out to find out what is there.
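The “throw them in a box and shake it” process can be sketched in a few lines of code. Everything below is invented for illustration: the creatures, their stats, and the turn-based combat rule are assumptions of this toy model, not any real game’s mechanics. The point is only that once the assumptions are programmed, the outcome is discovered by running the system rather than deduced by hand.

```python
# A toy "shake the box" simulation: two imagined creatures with assumed
# stats fight under a simple alternating-attack rule. All numbers are
# hypothetical; nothing here models a real game.
import random

def duel(a, b, rng):
    """Alternate attacks until one side's hit points reach zero."""
    a, b = dict(a), dict(b)          # copy so repeated duels start fresh
    attacker, defender = a, b
    while a["hp"] > 0 and b["hp"] > 0:
        if rng.random() < attacker["accuracy"]:   # attack may miss
            defender["hp"] -= attacker["damage"]
        attacker, defender = defender, attacker   # swap turns
    return a["name"] if a["hp"] > 0 else b["name"]

lion = {"name": "mountain lion", "hp": 30, "damage": 8, "accuracy": 0.7}
wasp = {"name": "huge wasp", "hp": 18, "damage": 5, "accuracy": 0.9}

# "Pour out the box" a thousand times and see what falls out.
rng = random.Random(0)
wins = sum(duel(lion, wasp, rng) == "mountain lion" for _ in range(1000))
print(f"mountain lion wins {wins}/1000 bouts")
```

Change a stat, and the distribution of outcomes shifts in ways that must be observed, not merely predicted: the essence of the non-ergodicity described above.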

Gaming as Art and Narrative

Some have argued that video games offer no valid mode of expression. In April 2002, U.S. District Judge Stephen N. Limbaugh, Sr. ruled that video games are not subject to First Amendment protections under the Constitution: “[There is] no conveyance of ideas, expression, or anything else that could possibly amount to speech. The court finds that video games have more in common with board games and sports than they do with motion pictures.”[2] Nothing could be farther from the truth, even apart from the point that board games and even athletics can, too, be artistic in the creativity that goes into their rules and aesthetics. The games contemplated and cited in the court opinion, “Fear Effect,” “Doom,” “Mortal Kombat,” and “Resident Evil,” were not only six to nine years old at the date of the opinion but had their titles falsely cited as “Mortal Combat” and “Resident of Evil Creek.”[3] This is testament to the fact that inexperience with video games bears a strong positive relationship with total, ape-like ignorance about them.

Though to some degree this paper is guilty of it, the primary problem with much formal academic research into video games is its insistence on “boxing” characteristics of games into neat little propositional packages. Usually, this is the result of an infrequent video game player conducting a study, or of a frequent gamer attempting to appeal to a broader audience with his writing. The problem with this approach lies in attempting to convey the facets of such a complex kind of thing to someone who has never experienced it. Simply to say, “imagine something like a movie, where the player holds a controller that moves a character around on screen and makes him do things,” clearly fails to capture the qualitative essence of video games, especially in the present-day context. It is the equivalent of trying to explain to an 11th-century Catholic bishop the concept of a car as “something like a carriage, but one that moves by itself.” He, too, would condemn it, probably as the product of witchcraft, because he would not understand how it worked. The attempt to “sound-bite” video game research certainly creates skewed perceptions of their supposed social implications.

Video games are free speech

Thankfully, the Seventh Circuit Court of Appeals (whose ruling covered a much broader jurisdiction than Limbaugh’s) had previously upheld video games as free speech. The bottom line is that inanimate visual art, audio, and films are protected as freedom of expression whether or not their substance contributes to public discourse. The same should go for video games, and prejudice against the “new guy” is no argument otherwise. Many games require just as much effort as a painting or a book, if not far more. Development teams often number between thirty and two thousand people, frequently allocating many members simply to developing the plot and characters and making them believable. Besides all the technological work that must go into making the game playable on the user’s computer, the art for the “look” of the game must be drafted and implemented in three-dimensional graphics, while voice-overs and sound effects must be created and integrated seamlessly into the whole. A good game must offer entertaining gameplay, an interesting plot, and appealing graphics and sound, all while operating on a budget and somehow turning a profit.

Interactivity and character

The desire for interactivity in entertainment is, in some regards, a product of social evolution. Instead of watching television, children often play in imaginary worlds. Many adventurous people have a passion for exploring the wilderness or traveling to different cities to enjoy their distinct aesthetics and atmosphere. It is this same spirit that leads to the appreciation of quality video games. Movies and books do not afford the audience the kind of flexibility and freedom that video games do; the actions of the characters always unfold no matter what the viewer or reader says or does, and all he can do is try to imagine otherwise. The video game provides the interface by which the audience can instead be the protagonist, for a change.

Many games, especially role-playing games, offer character customization schemes that affect both the aesthetic role of the character on screen (clothes, hair and skin color, body shape, facial features, etc.) and his substantive role (attributes, skills, and abilities). Throughout the game, characters can collect items or earn experience that grants them more abilities, often at the player’s choice. The result is character development, which leads to close identification with the character at play and even sentimental value (try deleting someone’s character in World of Warcraft and see whether the response is indifferent).

Anti-laissez-faire Ideas since the Founding: 1870-1918 (Mon, 07 Dec 2020, https://rareessays.com/philosophy/political-philosophy/anti-laissez-faire-ideas-since-the-founding-1870-1918/)

Most libertarians would say that capitalism is dead in America. Many on the left would say that it is still raging. It’s ultimately a matter of what you define as “capitalism” (voluntary exchange vs. large-corporation mercantilism), but we can be sure that the voluntary-exchange aspect is being eroded day by day, and has been attacked and defeated repeatedly in the past, particularly in the 20th century. But big pro-state changes like that don’t happen overnight. They are usually preceded by years of (usually very bad) philosophy, state-caused problems, and much civil unrest, and are followed by gigantic losses of liberty and increased dependency on the state.

Let’s take a look at some of the philosophy of anti-laissez-faire, particularly in its heyday: the years just before the First World War. There is little doubt that the explosive growth of America’s economy was the result of great human effort, the application of knowledge to production to create technology and capital, and the vast land and natural resources at the nation’s disposal. The framework of classical liberal (in full form, laissez-faire) economics pioneered by Great Britain gave great incentive to this process. Out of thousands of years of dysfunctional human civilization arose a century of liberalism, which grew the population and standard of living of human beings further than any century before it.

However, following the Civil War and the Second Industrial Revolution, class divisions had grown and fresh voices bemoaned the supposedly unjust distribution of wealth in society, calling into question the validity of the free market. Though lacking true ideological conformity, changes in attitude toward laissez-faire capitalism[1] since the Founding have been generally defined by any or all of three major shifts: most importantly, the replacement of liberal political rights with economic entitlements; closely connected, a new emphasis on collective instead of individual good; and in effect, the belief in the use of government as a valuable tool for bettering those collectives.

Of course, some important qualifications must be made. Firstly, not all objections made to the state of the nation under capitalism in the late 19th and early 20th centuries were necessarily at odds with traditional liberal principles. Truly consistent advocates of laissez-faire capitalism such as William Graham Sumner believed that government obstruction of trade unions and other forms of collective bargaining[2], for example, interfered with the individual’s right to freedom of association and self-determination. Broadly speaking, the political environment that permitted wealth to buy power in government was an essential threat to traditional liberty. Furthermore, it would be disingenuous to attempt to collectivize the entire spectrum of objections to liberal society, as they can be vastly different in their moral values, justifications for their principles, and the nature and practical execution of their policies.[3] Overall, the following breakdown is only a brief approximation of the characteristics of those opposed to laissez-faire economics, with a select few of several possible examples.

The Rise of “Economic Freedom” As a Standard of Living

The issues of most profound significance to any attitude toward economic and legal systems are the moral concepts that underlie them. Almost universally, opponents of capitalism believed that wrongdoing necessarily occurred in its implementation, whether in its means or in its ends. Previously, most of an individual’s rights in America were defined by a Lockean theory of natural law. Freedom of contract (and a right to the fulfillment of those contracts) permitted one to associate freely with others economically. However, great disparities in wealth concentration led critics of capitalism to denounce the status quo, which was allegedly caused by the consistent legal enactment of these principles. Factions such as the Populist movement, the Progressive movement, and the Socialist Party of America formed in the postbellum period as a response.[4] A new kind of right pervaded these alternatives to laissez-faire capitalism: “economic freedom.”

Karl Marx’s famous maxim, “from each according to his ability, and to each according to his need,” was one widely accepted economic substitute for property rights. Looking Backward (1888), a novel by Edward Bellamy, details a futuristic society that has supplanted competition with economic rights and duties in line with Marx’s maxim. “The reward of any service depended not upon its difficulty, danger or hardship, for throughout the world it seems that the most perilous, severe, and repulsive labor was done by the worst paid classes…,” states Dr. Leete, a knowledgeable member of the society.[5] Indeed, this was not the case in 1887; the natural system of economic rewards resulting from liberal rights is, first and foremost, based on the mutual exchange of desired values, i.e. supply and demand. “Wage slavery” became a popular phrase to describe the status of the common laborer. In The Living Wage (1906), John A. Ryan argues that the “American standard of living” is a “natural and absolute right” of citizenship. Though he argued it as a dictum of Christian values, many other leftists embraced a similar belief, and an ends-oriented theory of economic freedom gained popularity. No longer would the individual autonomy provided by rights determine one’s economic freedom; the level of wages would.[6]

Collectivism vs. Individualism

Logically entailed by the change in moral principles was an insistence that the good of the collective trumps the good of the individual. Once the notion that market-defined wages are fair was rejected outright, the market was to be supplanted by newfound social and moral considerations. Henry Demarest Lloyd, one of the foremost antagonists of Social Darwinism, placed great emphasis on collective governance and production. “Our liberties and our wealth are from the people and by the people,” he contends, “and both must be for the people.” His use of “the people” is not merely political euphemism, but imperative: “wealth, like government, is the product of the co-operation of all, and, like government, must be the property of all its creators.”[7]

Historically, a principal element of collectivization derived from stressing the importance of labor, in contrast to the capital-focused Industrial Revolutions of the 1800s. In 1914, Congress announced via the Clayton Act, “The labor of a human being is not a commodity.”[8] There is no better example of American labor-class activism than the writings of Socialist Party figurehead Eugene V. Debs. In Revolutionary Unionism (1905), Debs argues for the unity of the working class and, in Marxist form, condemns the purported separation of the worker from the rightful fruits of his labor. He repudiates the validity and effectiveness of craft unions (usually selective organizations of skilled workers), underscoring that “infinitely greater than [their] loyalty to their craft is their loyalty to the working class as a whole.”[9] He fiercely criticizes the structure that denies the struggling laborer his desires but fervently protects “the product of [the worker’s] labor, the property of the capitalist.” Then, when the dissatisfied become agitated and unrest begins, the government arrives to silence the menace: “If you… have made more steel than your master can sell, and you are locked out and get hungry, and the soldiers are called out, it is to protect the steel and shoot you who made the steel…”[10] Debs’ arguments reflected common sentiments of outrage toward a society in which the vast majority of people, though they were a necessary part of production, toiled heavily and possessed little while a tiny group reaped gigantic rewards.

A different form of collectivism, nationalism (in the spirit of the times), was also a popular source of ideological opposition to the free market. Similar opinions already had a large presence at the Founding in the form of the Federalist Party and Alexander Hamilton, who argued for state intervention as a means of furthering the nation’s economic goals. Bellamy’s Looking Backward, which sparked a short-lived but sizable nationalist movement, extolled the replacement of self-interest with a higher cause: “Now that industry of whatever sort is no longer self service, but service of the nation, patriotism, passion for humanity, impel the worker as in your day they did the soldier,” says Dr. Leete. Another thinker, Herbert Croly, believed nationalism belonged hand-in-hand with democracy, stating that “the first duty of a good democrat would be that of rendering to his country loyal patriotic service.”

The Role of the State in the Capitalist Economy

Government would be the primary tool in executing these policies, with force as the only way to guarantee Americans their social and economic rights. As German sociologist Max Weber explained, “The rise of modern freedom presupposed unique constellations which will never repeat themselves.” These “unique constellations” likely refer to the vast expanses of land and resources in North America, among other contingent facts, which gave rise to the harmony provided by decentralization. Absent such luck, on this view, freedom must be centrally planned. Bellamy comments that Americans in the nineteenth century possessed a “galling personal dependence upon others as to the very means of life.”[11] The founder of the American Economic Association, an organization created to battle laissez-faire economics, wrote, “we regard the state as an educational and ethical agency whose positive assistance is one of the indispensable conditions of human progress.”[12]

Woodrow Wilson, in fulfillment of many of Herbert Croly’s ideas, advocated a “New Freedom.” In The Meaning of Democracy (1912), he claims that while laissez-faire Jeffersonian ideals furnished “a government of free citizens and of equal opportunity,” it was the physical characteristics of the nation at that time which suited it: families each lived in separate households, employers were closer to their employees, and so forth (arguments very similar to Weber’s “unique constellations”). Using Glasgow as an example, Wilson draws a metaphorical parallel between the Scottish city’s common hallways in residential buildings, which were legally defined as public streets, and the “corridors” of large corporations, which should likewise be regulated as part of the public domain. In this, he claims he is fighting against “monopolistic control,” and in turn “fighting for the liberty of every man in America, and fighting for the liberty of American industry.”[13] Not coincidentally, the Wilson administration heralded the introduction of the discretionary federal income tax through the Sixteenth Amendment in 1913.

Is true capitalism dead?

Clearly, attitudes toward laissez-faire capitalism have turned significantly against it since the Founding. This is not to suggest that there was unanimity over the issue during America’s formative years, but major policy battles accompanied by successful movements have led to aggregate changes in economic viewpoints. The prominent influences of the postbellum period, such as the Progressives, have nearly eradicated belief in the functionality and morality of absolute laissez-faire governance. Likewise, the public institutions established in the wake of those movements have further ingrained the permanent, expanded role of government in the national consciousness. Even “right-wing” politicians who profess the values of capitalism take their cues from business interests in exchange for financial and political support. Few candidates can plausibly survive electorally on a genuine non-interventionist policy platform. For America, the unabridged free market is dead.


[1] To clarify, any mention of “capitalism” alone still is referring to unlimited, absolute laissez-faire capitalism with the proper host of necessary political rights. Likewise, “liberal” refers to the host of values associated with it.

[2] This is, obviously, supposing that these trade unions are behaving by legitimate and economic means. In the “Forgotten Man,” Sumner attacks unions which restrict the free flow of labor, by limiting the pool of tradesmen in order to artificially raise wages.

[3] Some thinkers were nationalistic, like Bellamy; others were religious, like Ryan; and so forth.

[4] For space considerations, this analysis will not go past the Wilson administration.

[5] Edward Bellamy, “Looking Backward.” In American Political Thought, ed. Kenneth Dolbeare and Michael S. Cummings, 293 (Washington, D.C.: CQ Press, 2004).

[6] Eric Foner, The Story of American Freedom, 144 (New York: W.W. Norton & Company, Inc., 2004.)

[7] Henry Demarest Lloyd, “Revolution: The Evolution of Socialism.” In APT, 304-305.

[8] Foner, 144.

[9] Eugene V. Debs, “Revolutionary Unionism.” In APT, 359.

[10] Debs, “Revolutionary Unionism.” In APT, 355.

[11] Foner, 129.

[12] Foner, 130.

[13] Woodrow Wilson, “The Meaning of Democracy.” In APT, 393-395.

A Critique of Bentham and John Stuart Mill’s Utilitarianism (Mon, 07 Dec 2020, https://rareessays.com/philosophy/john-stuart-mill-utilitarianism/)

Utilitarianism is a philosophical epidemic in contemporary social and political dialogue. In one form or another, the notion of a “greater good” above the good of individual agents has taken root in group-centric ideologies. Dictators have invoked it on nationalistic or ethnocentric grounds; leftists have promoted it in the name of “mankind”; even the most individualistic of nations, such as the United States, are overrun by policy concerns for the “national interest.” Part of its appeal is intuitive: if happiness is good, and we want more of good things, then the greatest happiness is what we ought to pursue. This sounds decent enough, but it is held together only by a string of similar-sounding words. What is meant by “happiness,” and in what sense is it “good”? To whom does the “greatest happiness” pertain, and why “ought” we to pursue it? Utilitarianism, even in its most “complete” form, fails to give valid answers to these questions.

Overall, utilitarianism is a philosophical mess. Its moral implications are strange; its practice is filled with difficulties and absurdities; and its justification arises from, principally, nowhere. Mill’s treatment of utilitarianism, in particular, has egregious flaws. Its essential characteristics can be listed as follows: pleasure (in the non-degrading sense) is all that is intrinsically valuable; each person’s pleasure is valued equally, with allowance for “kind”; an action’s morality is measured by how much pleasure it produces (for all); and agents must be disinterested spectators in order to consider their decisions correctly.[1] The problems with utilitarianism fall into three, often interrelated, categories: the alienation of agents from themselves, problems of moral calculation, and lack of justification.

Consequentialist Utilitarianism

The doctrine of consequentialism[2] is the principal cause of the bizarre moral demands expressed by utilitarianism. As Philippa Foot notes, there is a very compelling idea behind suggesting that no one would ever consider it morally acceptable to choose a worse state of affairs over a better one.[3] Using that idea as an argument against non-consequentialist theories, however, either begs the question or causes friction with pre-existing, weak intuition, depending on the senses of the words “better” and “worse.” If they are used normatively, clearly no deontological[4] moral theorist would reject a morally “better” state of affairs (more duties fulfilled) for a “worse” one; to do so would be antithetical to having a moral theory in the first place, and thus this case is trivial.[5] If they are used in some sense which is non-normative but somehow value-based, then they essentially ignore the moral theory under criticism, instead imposing a standard from a foreign hierarchy of values, i.e. a floating abstraction. For example, one might criticize Ayn Rand’s Objectivism for providing no political guarantee of care for the physically handicapped (beyond protection from coercion), indicating an alleged problem with the consequences of Objectivism. However, Objectivism establishes a direct relationship between its principles (e.g., non-interventionism) and concretes (e.g., the nature of reality, the nature of humans, the validity of the senses), so this criticism merely involves some relativistic view of a “good” state of affairs being used as a standard to judge a reasoned moral theory.

Consequentialism affects the role of the agent in moral decision-making quite severely. Under consequentialist utilitarianism, an action’s morality can be defined thusly: that action is moral which brings about the intrinsically desirable state of affairs. An agent is as much morally responsible for those states which arise from his action as those from his inaction. Therefore, at any given point in time, he must decide which action will produce the proper state from that point forward, regardless of the past. An important observation here is that the actions of other agents are no greater factors in the agent’s decision than anything else, i.e. they are simply part of the environment in which he must bring about the best state of affairs.

An ethical puzzle for consequentialists

In other words, all considerations for an agent’s decision are exogenous: his concern ought to be to take what is given, and to produce the best outcome from it. Suppose a man, a proficient utilitarian named John Stuart, is walking down the street of a very happy society. Along comes a man of great evil who is part of an alliance of reputable wrong-doers, who presents the utilitarian with an ultimatum: “kill yourself, or I will kill ten people very much like you in this very happy society.” The evil alliance of wrong-doing does, indeed, have a strict policy of honoring its sinister promises when they are made, and has yet to break it. J.S., as a good utilitarian, knows that if he does not kill himself, he will be morally responsible for the deaths of ten very happy people, since he could have prevented it.[6]

Clearly, there is something amiss in this example. A determined utilitarian could say that the evil man is morally blameworthy for presenting J.S. with such a terrible situation, since either outcome (minus one person or minus ten people) reduces the greatest overall happiness. Nonetheless, this does not change the nature of J.S.’s decision: he must weigh the utilities at hand, which obviously point away from his continued existence, and make the right or wrong choice. The relevant implication is that J.S., as an agent, is alienated from his own person. He is not only responsible for his own actions, but for the actions of others (the evil man and the sinister organization). As Bernard Williams states, consequentialism implies that “from the moral point of view, there is no comprehensible difference which consists just in my bringing about a certain outcome rather than someone else’s producing it.” Because the focus of utilitarian moral action is to produce a specific end-state for everyone, J.S. (or anyone in his position) must put aside all of his projects (being happy, being a philosopher, being alive) when the ultimatum is issued to him and act as the projects of others demand.[7]

The calculation problem

The utilitarian calculus is also the source of much trouble. How practical is it to integrate all the requisite information (the happiness of all others, the results of an action on it, etc.) into a properly weighted scale in order to make the right decision? Mill deflects this problem by suggesting that it is shared by all moral theories:

There is no ethical creed which does not temper the rigidity of its laws by giving a certain latitude, under the moral responsibility of the agent, for accommodation to peculiarities of circumstances and, under every creed, at the opening thus made, self-deception and dishonest casuistry get in. There exists no moral system under which there do not arise unequivocal cases of conflicting obligation.

This contention does not suffice, however. It is indeed true that all moral theories encounter difficulties, but to invoke that as an absolution of utilitarianism’s difficulties is analogous to telling an upstart typewriter company in the 21st century, “every business has trouble and needs to invest at a loss when it gets started. Don’t worry, go ahead and invest.” Utilitarianism requires a tremendous amount of input to discover the right decision, across many persons in both the short run and the long run. The sheer volume of particulars in the world must be considered in every case, as opposed to the fewer particulars that would be necessary in, say, a natural law framework of morality. The conclusion is that the “calculated” utilitarian choice often diverges wildly from the ideal, morally correct choice. By contrast, in a natural law theory, someone makes a contract and knows to keep it because, via reason, he has concluded that contracts ought to be kept, and in this specific situation he knows that the principle applies (for the obvious reason that he signed his name on a physical contract with another person). In the context of the theory, there is only a minuscule chance that he has made an error in his judgment.

Rousseau on Represented Sovereignty in Democracy (Sun, 06 Dec 2020, https://rareessays.com/philosophy/political-philosophy/rousseau-on-represented-sovereignty-in-democracy/)

“…The moment a people allows itself to be represented, it is no longer free: it no longer exists.” A “pure democracy” interpretation of Rousseau could use this statement about representatives as evidence that The Social Contract is a manifesto of radical self-government. If we hold as an axiom from this interpretation that a person under representatives (as in the United States or United Kingdom, for example) is not free, we find that The Social Contract presents a myriad of practical and logical problems if it simultaneously asserts that people in any state can be free. Most people readily accept that total democracy is not very feasible. Such a system would place high demands on citizenship, requiring full participation in both legislation and enforcement. Rousseau himself concedes that such a state would only be possible on a small scale. More importantly, such a restrictive view of sovereignty conflicts with almost any form of trusteeship in duty, including the hiring of deputies to execute the will of the state, which would be a requirement for any concrete state—including those that Rousseau would advocate—to function.

Rousseau’s distinction between law and decree

A careful reading, however, shows that the pure-democracy interpretation of Rousseau is likely false. While he may be biased in preference toward a “city-state direct democracy” orientation, that says nothing about his political theory.[1] The proper implication of his position toward representation is that while sovereignty cannot be expressed through representatives, not all cases of the existence of “representatives” (in the broad sense of the word) entail a loss of sovereignty. At the center of this more refined interpretation are the specific meanings of the words he uses, marked by the strict difference between law and decree, Sovereign and government, and legislative and executive:

“When, for instance, the people of Athens nominated or displaced its rulers, decreed honors to one, and imposed penalties on another, and, by multitude of particular decrees, exercised all the functions of government indiscriminately, it had in such cases no longer a general will in the strict sense; it was acting no longer as Sovereign, but as magistrate.”[2]

In other words, the Sovereign is to law[3] as the magistrate is to decree. When Rousseau states “very few nations have any laws,”[4] he is not suggesting that few nations have rules and statutes, but that few have a core set of fundamental laws of governance in direct accordance with the general will. Because the general will does not deal in particular objects, laws do not either, and the government’s duties lie in decrees. The Sovereign is the entity that sets the laws, but does not set policy: that is the role of government.[5] Thus, it is not in conflict with Rousseau’s position to elect representatives of the people who determine necessary decrees, such as a formal treaty with another nation or a mandate for the construction of a bridge. In fact, this implies that Rousseau would possibly approve of a constitutional democracy similar to the one in the United States: the Constitution, which was approved via an act of the people’s sovereignty (the Constitutional Conventions), is the body of laws; and the executive, legislative, and judicial branches are tasked with representing the people in the construction of decrees.[6]

Given this account, Rousseau is certainly correct in asserting that sovereignty is not something that can (or should) be “given” to a representative, just as he argues that one cannot (or should not) sell himself into slavery. However, even this revision is in conflict with reality: not in its words, but in its implications. What Rousseau intends to suggest by his argument is that sovereignty, in the construction of laws via the general will, cannot be represented. On the contrary, it is indeed possible for a man to exercise sovereignty via a representative, not only at the level of decrees, but also at the level of laws.

The law-decree dichotomy can only take us so far, however, as it marks merely a difference in the semantics of the statements that embody each. This is because one who decrees must always do so in accordance with laws. Thus, when a decree states “1000 gold pieces shall be levied for the construction of a bridge on the river,” it holds implicit within it the principles that taxation and expenditure for the construction of such projects are legitimate, in addition to the contingent facts of the bridge’s form or location. Though The Social Contract is not intended to carry any specific prescriptions for laws except those necessary for the general will (individual freedom in nature and Sovereignty), Rousseau acknowledges the existence of objective truths: “what is well and in conformity with order is so by the nature of things and independently of human conventions.”[7] This places rational thought at the forefront of the discovery and exposition of laws and decrees, and sets the critical basis for the remainder of this discussion.

The role of representatives as implicit law-makers

There is a common misconception that “principles” (i.e., laws in terms of this discussion) are a shortened, codified set of statements that are then “applied” to specific situations; however, embodied in a complete expression of these principles are the nuances that apply to particulars. The statement “one should not kill another” is not a principle in itself, but merely a brief, sentence-long summary of the principles related to the death of a man at the hands of another. A judge may inquire into a specific case and discover that man A killed man B, but only after B attacked him in an alleyway. He then concludes in the verdict that man A was acting in self-defense, and thus acted lawfully. Suppose, however, we had more facts about the case: A had fought with B, defeating him and holding him at his mercy, but had killed B anyway. The verdict changes to man A having behaved legitimately in self-defense, but unlawfully in killing his incapacitated enemy in cold blood. Each additional particular fact present in the case, if, when added, it alters or further justifies the verdict, is thus essentially a part of the principles; we could then revise our initial statement to read “one should not kill another, except in self-defense, which is defined as an imminent threat to one’s life.”

Therefore, the purpose of the judge is to ascertain the relevant particulars of a (new) situation and apply them against the principles; yet, in doing this, he is refining and adding to the existing set of laws. It is practically impossible for any body of people, no matter how large or how small, to prescribe the entirety of the set of all relevant lawful principles at any time from the outset to the demise of a society. Furthermore, it is logically impossible for no form of judgment to be required in observing factual data, ascertaining its characteristics, and matching it against the abstract law. To one degree or another, all deputies and representatives appointed to the executive branch of government (as Rousseau defines it) are law-makers, in that they must behave lawfully in their positions but do not have prescriptions in totality for the numerous situations their duties encounter. Better said, it is frequently the case that a function of government results in an exercise of the power of the Sovereign, in some positions more than others (a judge more than a janitor). As such, it is inevitable that the people will elect representatives or deputies who will function not only as executors of law, but as legislators of law. Represented sovereignty is the result, which—by Rousseau’s definitions—makes all men everywhere either slaves if they associate and create government, or subject to pure force in nature if they do not.[8]

Can society use representatives and maintain its sovereignty?

Is the above fact just an unpleasant reality? The necessity that compels a free people to appoint a military commander, a skilled diplomat, or a judicial bench is their understanding of the unequal distribution of talents (a fact which Rousseau acknowledges). The general will can be such that rationally-behaving people, recognizing their limitations, submit the more complex parts of necessary judgments to representatives. It is not only rational, but also well within their rights. If there were an issue of law too complicated for one man to understand, and it were beyond him ever to achieve the necessary level of understanding (perhaps because he was too busy working to feed himself), what would be his proper action? Should he cast his vote by chance, or entrust his judgment to another, or neither? The first and last options are clearly irrational, leaving only the second in question, and it is indeed a rational choice. While the man may not understand the issue at hand, he understands that there is a question, that it has a rational and right answer, and that there are others whom he has observed produce the right answer on several occasions and whom he is willing to trust. He is simply integrating the best information he has to make the decision that he expects to generate the most accurate answer, and he is entirely justified and free in doing so.

Granted, if many people understood little and did not even understand the notion of sovereignty itself, it would be extremely unlikely for a legitimate state to spring into existence, if a state could ever be legitimate at all. Nonetheless, to suggest that someone of weaker rational faculty lacks the fact (or even the right) of sovereignty is absurd. So long as this person understands his own right to freedom (in nature) and his sovereignty, he does not forfeit these rights if he fails to comprehend the answers to necessary higher questions and willingly defers to another. If a member of the educated elite fails to comprehend a higher question, does he then lose his sovereignty? And what if one smarter than him does so as well? The answer is no, because to be sovereign does not mean to possess a complete, correct, and self-determined opinion on all issues, but to possess an independent will which can freely associate and decide on how one should be governed. That Rousseau would have disputed my conclusion is a matter of historical argument; in the end, however, the implications of his statement that a “represented sovereign” is not free must be made clear.


[1] Marini, Frank. Midwest Journal of Political Science, Vol. 11, No. 4 (Nov. 1967), pp. 451-470.

[2] Rousseau, Jean-Jacques. The Social Contract (New York: Dover Publications), p. 20.

[3] From this point on, I will use the word “law” in the same sense that Rousseau uses it.

[4] Rousseau, p. 65

[5] Rousseau, pp. 37-38

[6] In reality, the U.S. government does a lot of law-making of its own by morphing the meaning of the Constitution when it is convenient, in order to pass previously impassable decrees.

[7] Rousseau, p. 23

[8] Ironically, this conclusion could force Rousseau’s position to accept a direct participation model of democracy, which I just argued was not the position taken in The Social Contract.

The post Rousseau on Represented Sovereignty in Democracy appeared first on Rare Essays.

A Functionalist View of Consciousness in Bicentennial Man
Based on Asimov’s The Bicentennial Man, the 1999 film Bicentennial Man stars Robin Williams as an increasingly lifelike robot. One fundamental question is: at any stage, does Andrew possess “consciousness”? Setting aside the belief that humans, above all other things, have been supernaturally “chosen” or imbued with consciousness, a naturalistic assessment of qualitative experience permits Andrew to possess it. If evolution or otherwise a long series of physical processes could produce consciousness[1], then there is little reason to believe that a very advanced robotic realization of the same property is not logically and physically possible. Neural chauvinism is called chauvinism precisely because there is little argument for it. Functionalism cannot correctly answer the question “does Andrew feel or not?” because within its framework, the question is nonsense.

Functionalism defines a mind as something with input, processing, and output,[2] but it makes no assertions about the specific physical content of any of those three categories. Qualitative experience is one, but not the only, means by which the mind can recognize and sort inputs.[3] It has its own set of possible physical manifestations (e.g., the evolutionary, biological conditions that led to human consciousness). Mental states are defined by their functional properties, which are defined by causal roles of different orders. The highest-order roles are those which immediately precede output, but extremely complex causal networks precede them (in minds worth investigating). The multiple realizability thesis applies to all causal orders, which include qualitative experience. This accounts for phenomena such as inverted spectra and is sufficiently abstract to accommodate all suggestible material structures. Just as different materials can realize the same structure of qualitative experience, by implication different structures of qualitative experience can realize the same mental states.
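The multiple realizability point can be made concrete with a toy sketch (mine, not the essay’s; all names and stimuli are purely illustrative). Two “minds” with entirely different internals occupy the same causal role so long as they map the same inputs to the same outputs:

```python
# Toy illustration of multiple realizability: two realizations of the same
# input-processing-output role, built from different internal "materials."

def mind_lookup(stimulus: str) -> str:
    """Realization 1: a fixed lookup table mapping stimuli to responses."""
    table = {"sharp pain": "withdraw", "pleasant warmth": "approach"}
    return table.get(stimulus, "ignore")

def mind_rules(stimulus: str) -> str:
    """Realization 2: rule-based processing with entirely different internals."""
    if "pain" in stimulus:
        return "withdraw"
    if "warmth" in stimulus:
        return "approach"
    return "ignore"

# Functionally identical over these inputs, despite different internals.
for s in ["sharp pain", "pleasant warmth", "loud noise"]:
    assert mind_lookup(s) == mind_rules(s)
```

On the functionalist view sketched above, nothing about the lookup table’s “material” makes it more or less a realization of the role than the rule-based version; only the causal profile from input to output matters.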

An important point to make here is that there is no absolute binary variable that determines “feelings” and “no feelings.” Consciousness is not a singularly defined characteristic; that is, there is not one sole consciousness that a mind either possesses or does not possess. Qualitative experience merges continuously into the conceptual space of mental causal functions; it is one of infinitely many possible causal structures that can inhabit the mind. Logically, there is nothing essential or special about it to the definition of a mind (though it is extremely practical). In other words, the notion of qualitative experience itself is only a title we impose on causal properties. Only as a matter of convenience should we ever set qualia apart from other aspects of the processing functions of the mind. To ask “does Andrew feel or not?” is a philosophically loaded question: it either presupposes that “feel” is binary and an exclusive descriptor of qualitative experience, or it is pointing to a specific definition of “feel” in the aforementioned continuum (but this is much less likely).

Suppose one asked, “is that chair red or not?” If the question were posed in the same sense as we are viewing the question about Andrew, it would be committing the same fallacy. Because the color spectrum is continuous, there is no absolute defining line between those colors which are red and the next class of color in the spectrum. In this case, the inquirer is appealing to some commonly accepted definition of red, for the ease of avoiding having to ask “is the chair of light-wave frequency range X-Y?” However, this social norm bears no absolute factual justification; it is based on pragmatic, short-hand linguistic expressions of color. The same applies to “feel”: it is folk psychology and a mere conversational convenience. Of course, “feel,” “qualia,” “qualitative experience,” and “consciousness” are still useful terms. After all, our subjective idea of qualitative experience is as different from involuntary functions of the mind (like heart-rate control) as “reds” are different from “blues,” despite the fact that there is no absolute dividing line between them. The distinction just needs to be made that these terms are contingent on whatever we decide are their most pragmatic uses.

Could Andrew possibly be a zombie? The first issue to handle is the notion of philosophical zombies altogether. Heil claims, “zombies satisfy all the functionalist criteria for possessing a full complement of states of mind.”[4] Some philosophers utilize this as an objection to functionalism, but the functionalist should be able to contend that a mind can exist without qualia. Given the functionalist definition of a mind, in tandem with the possibility of different material states determining a single causal state, there is no reason why this cannot be the case. As such, there is nothing philosophically hazardous in permitting such a broad definition of “a mind.” This allows for the possibility of behavioral zombies, whose outputs are functionally isomorphic to those of qualia-bestowed (or cursed) creatures.

In what seems like a conflicting view, Paul and Patricia Churchland argue that functionalism requires intrinsic properties to play causal roles, explaining that “our sensations are anyway token-identical with the physical states that realize them.”[5] This permits a 60-Hertz frequency spike, for example, to suffice as those properties. However, whether absent qualia are an issue for functionalism is ultimately decided by the specific definition of “qualia.” The relevant implication of both positions is that robot minds are not logically problematic, no matter how qualia are treated.[6]

Approximating its original intention, we can appropriately revise the question to something other than nonsense: “does Andrew possess the essential qualities commonly associated with human feelings?” Realistically, this is a question of empirics. He may very well be the logically possible behavioral zombie. First, a precise definition of the nature and breadth of the “feelings” must be decided. Then, the components of the mind in question must be tested to determine if those conditions are met.

The short answer is that it can go either way. Then, at each stage in his life, he can be tested for those characteristics. Was his positronic brain sufficient to give him human-like qualitative experience? Did the incorporation of a central nervous system guarantee it? In any case, there is no short-cut to a definite, final conclusion without scientific inquiry.


[1] I switch between the uses of “consciousness,” “qualitative experience,” and “qualia” frequently where I feel it is appropriate, but generally statements about them are in reference to the same logical argument. If anything sounds strange or incorrect, I can explain it.

[2] https://en.wikipedia.org/wiki/Functionalism_(philosophy_of_mind)

[3] I’m not sure what the official constraints of “Functionalism” are on what a mind can be, but this form permits something as simple as a circuit switch box to be a (very rudimentary) mind. By this, Andrew also has a mind.

[4] Heil, John. Philosophy of Mind: A Contemporary Introduction, p. 124.

[5] Churchland, pp. 353-354

[6] In other words, for any fixed definition of qualia, my position (appropriately adapted to the new definition) is most likely not at odds with the Churchlands’; we are only attacking the issue in different ways. For convenience, the rest of the paper should be understood in terms of my framework.

The post A Functionalist View of Consciousness in Bicentennial Man appeared first on Rare Essays.

Ludwig Wittgenstein’s On Certainty, and how G.E. Moore Fails to Respond to the Skeptics
Beginning with Descartes, traditional forms of epistemology have attempted to create a foundation of knowledge that cannot be doubted. The skeptical tradition, employing and developing Cartesian doubt among other variations of it, has sought to undermine the possibility of certainty about the external world and, more generally, all knowledge. The philosopher G.E. Moore attempted to respond to skepticism by directly demonstrating his certain knowledge of the external world. As a response to skepticism and to Moore’s attempted refutation of it, Wittgenstein essentially argues that while there is no valid means to actually answer the skeptic, the skeptic’s claims are nonsensical in the first place. The skeptic’s claims can only function when the propositions they doubt are removed from all possible contexts, rendering them meaningless and requiring an invocation of logic external to language and human understanding. Fundamentally, Wittgenstein replaces Moore’s response to skepticism’s “you cannot know,” the assertion “I do know,” with what ultimately reduces to “I do not need to ‘know’.”

Skepticism and logical possibility

While skepticism takes many different forms, the primary form of skepticism under consideration can be described by a single, general argument. This skepticism’s basic premise is that we are unable to logically disprove possible states of affairs in the world that would undermine our claims to knowledge about reality (“skeptical possibilities”). Generally, arguments for skepticism, including the original Cartesian formulation, ultimately take the form of a modus ponens argument, such as:

  1. If I cannot distinguish between dreaming and being awake, then I cannot be sure I have a body.
  2. I cannot distinguish between dreaming and being awake.
  3. Therefore, I cannot be sure that I have a body.

Support for the second premise derives from the possibility that, for any empirical proposition we form at a point in time, events could follow that would provide evidence to falsify that belief. If this is true, no empirical proposition is verifiable and thus none are certain.

Wittgenstein does not disagree with this, to an extent; he grants that such subsequent falsifying events are indeed always a possibility. For example, one may have very good reasons for believing his old friend is standing in front of him, but it is imaginable for that person to suddenly start behaving as though he were not that old friend after all (613).[1] However, Wittgenstein challenges the notion that such events’ transpiring would undermine the relevant prior empirical beliefs about the situation. In other words, he argues that such possibilities do not undermine “knowledge” in the meaningful sense of the word, but merely fail to satisfy the conditions of a notion of logic removed from the practitioners of logic (human beings).

Wittgenstein on doubt

In the second paragraph of On Certainty, Wittgenstein elucidates the role of doubt, almost spelling out immediately what will become his objection against skepticism: “from its seeming to me – or to everyone – to be so, it doesn’t follow that it is so. What we can ask is whether it can make sense to doubt it [emphasis added]” (2). Though the skeptics are correct in questioning the assertion of seeming or “common-sense” empirical fact, such doubts fail to (meaningfully) support their assertion that all knowledge can be undermined. Hence, Wittgenstein attacks the core of “radical doubt” as nonsensical.

Primarily, the skeptics make the error of conceiving of logic as an empirical statement – as something independent of the agent in question – that is subject to the possibility of falsification. The Tractatus, though earlier in Wittgenstein’s philosophical development, is particularly illustrative of this problem with skepticism: “Propositions cannot represent logical form: it is mirrored in them. What finds reflection in language, language cannot represent.”[2] Moreover, we cannot sensibly falsify (or take any other action standing outside of) logic, since we cannot describe what a non-logical world would look like.[3] Yet this is precisely what skepticism demands.

Skepticism, by externalizing logic, thus encounters serious error when it casts extreme doubts upon common-sense propositions, which are necessary for establishing language (and hence the use of logic). When someone says, “There are trees,” he is presupposing the existence of objects. This is not to imply an epistemological assertion that there are objects in a specific sense of the word; it simply reveals the absurdity of saying “objects do not exist.” If one holds that to be true, he runs into the intractable problem of explaining what it is one is speaking of when one says “there are trees.” Day-to-day life demonstrates that common-sense propositions must be known in some way, as evidenced by the fact that we say things to others like “move that table over here” or “open the window” (7). In light of this, the nature of being mistaken about a statement like “I am certain that these are words on this paper” is unclear (17, 24, 32). What it would be like to find out that “here is not a hand” is peculiar and seemingly indescribable by language. This is because the language-games people use, those ingrained deeply in their practices and beliefs, depend on affirming such propositions in order to make any sense (to be explained shortly).

Furthermore, as Wittgenstein asserts several times, the notion of doubt presupposes certainty (115 and elsewhere). In order for one to doubt anything, one must first have certainty about what he doubts, be certain that he, in fact, doubts it, and so on. This relates closely to the foundation of (the human expression of) logic in language, as implied in the Tractatus. In the Philosophical Investigations, Wittgenstein delves into the nature of language-games, which later play an important role in On Certainty. Section 7 of the Investigations states, “I shall also call the whole, consisting of language and the actions into which it is woven, the ‘language-game’.”

Learning and language

Wittgenstein explores how a child learns, and the relationship between its learning and language, in section 6 of the Investigations. A child learns what words mean by ostensive action; for example, one might instruct, “that is a chair; that is a car; that is red; etc.” In all this, however, there is a necessity for an understanding of ostensive definition itself. A child, to learn that “this is called ‘car’,” must first comprehend that names can be assigned to things. Later, in section 31, Wittgenstein uses the example of teaching someone how to play chess. When he points to a piece and says, “this is the king; it can move like this,…” the phrase “this is called the ‘king’” is only a definition if the student knows what a game is, what a piece in a game is, etc.

The point of the exploration of language games is, in short, that understanding requires some background of trust – some kind of sureness. Continuing in On Certainty with the case of the child, Wittgenstein says, “the child learns by believing the adult. Doubt comes after belief” (160). A child could never learn anything if he constantly questioned existence, for if that were to happen, he could never learn the definitions of things ostensively, just as if a person were to question the game or the pieces of chess, he would never learn that “this is called “the king” and it moves like so.”

The process of learning language is one of action (or reaction) first, then epistemological reflection at a later time, once a system of beliefs is formed and it becomes gradually understood where doubt can be reasonable (538). For example, a child initially listens to verbal and written instructions, responding trustingly and candidly to what others say. When a child realizes that people have the capability to lie, however, he then has a reasonable basis for sometimes doubting the truth of what someone says. The system of belief he develops is essential to forming these kinds of curiosities and doubts. If he did not understand that other human beings like himself existed and behaved autonomously and with similar capabilities, he could not even begin to comprehend the notion of doubting the truth of their words. Moreover, even when he believed and spoke candidly, he would not have been able to do so had he questioned the existence of other human beings, and he would not have been able to understand the existence of other human beings if he questioned the existence of a world external to him.

Language is inextricably embedded into our lives. Without it, we would be unable to learn, and without learning, we would be unable to doubt. Further, it is the common understanding and foundations of language that allow human beings to communicate. Incidentally, by no means is the plain use of signs universally indicative of meaning (another basic idea explored in Tractatus that blocks a potential route for skepticism). A person who interprets and acts upon the mathematical directive “halve” by multiplying by three hundred is not casting doubt upon halving, but is merely out of sync with the rules and norms of a language-game. He is not presenting a skeptical challenge to knowledge of mathematics.

At the crux of his argument, Wittgenstein rejects the Cartesian-style premise that all propositions, even foundational ones, should be doubted along with any beliefs that they justify, unless they can be proven empirically. The skeptics’ doubt of these propositions does not merely test the truth, falsehood, or likelihood of those propositions, but ultimately necessitates questioning the methods by which testable empirical propositions are tested (317, 318). If all knowledge is based on testable empirical propositions that are justified by methods that are themselves subject to the skeptics’ pervasive doubt, then one must always acknowledge skeptical possibilities (i.e., the skeptics’ position is meaningful).

Skepticism of the external world isn’t useful

To counter this, Wittgenstein explains that claims like “here is a hand” or “the world has existed for longer than five minutes” merely appear to be statements about the external world that are true or false. However, these propositions lie beyond knowledge or doubt, because they serve as the framework by which we can speak about objects in the world. He uses two metaphors: first, that these kinds of propositions are like a “river-bed” that allows the “river of language” to flow freely (97, 99); and second, that the propositions are like hinges on a door, which must be fixed in order for the door to function in any significant way (341, 343). These kinds of propositions are ostensively defined; they are not making an empirical claim about the external world, but merely show an example and hence demonstrate how the statement is to be used. The possibility of language is made not by actual facts in the world (which the skeptic can always undermine), but by simply never calling those facts into question (creating the “river-bed”).[4] Thus, Wittgenstein does superficially agree with the skeptic that such foundational propositions lie beyond empirical verification, but questions the sensibility and usefulness of such an assertion.

The post Ludwig Wittgenstein’s On Certainty, and how G.E. Moore Fails to Respond to the Skeptics appeared first on Rare Essays.

Scholarship Submission: Ayn Rand Institute Atlas Shrugged Essay Competition
Admin here. I want to tell all of you a story about this. I drafted this essay in 2006, in the most religiously Objectivist fashion I could. It does amaze me just how competitive things were for this scholarship, which I believe was $2,000 at the time. Like all things you need to try when you’re young, you have to see if you’re the kind of guy who’s good enough to win discretionary scholarships. I had standard merit-based scholarships, but never won anything that only went to a handful of people. Accordingly, all I got after submitting this essay was an offer to sign up for the Ayn Rand fan club. I reflect on the excessive amount of time it took me to write it. Well, I guess your time just isn’t that valuable when you’re a college student, so it was still a worthwhile experience.

Prompt: At his trial, Hank Rearden declares: “The public good be damned, I will have no part of it!” What does he mean? How does this issue relate to the rest of the novel and its meaning? Explain.

When Hank Rearden sells four thousand tons of steel in a mutual trade, he is tried as a criminal for breaking economic regulations. Rearden’s resounding declaration at the end of his speech during his trial carries two significant concepts that reverberate throughout Atlas Shrugged: the illegitimacy of the principle of collective good, and the removal of the sanction of the victim from acts of evil.

“The public good” is the political extension of the moral principle of “the greater good,” a concept that implies the existence of some single good that supersedes the good and well-being of the individual. This underlies one particular category of the philosophies that the novel’s heroes vehemently oppose: collectivism. Whether they truly believe in it or not, many of the villains in the book use the notion of collective good as their stated motive for action. It permits them to initiate the use of force against individuals in the name of a supposed higher justice. More importantly, the justice entailed by a “greater good” is so ambiguous and incalculable that it provides a blank check of moral authority to those who wield it, granting nearly any whim the ruse of righteousness.

Rearden had committed no moral crime. Much to the prosecutor’s confusion, he neither offers any official defense nor throws himself upon the mercy of the court. Defending himself, he argues, would be not only an admission of guilt, but a sanction of the processes and principles that would be invoked to deprive him of his life, liberty, and property, only perpetuating “the illusion of dealing with a tribunal of justice” to onlookers. “If [my fellow men] believe that they may seize my property simply because they need it – well, so does any burglar. There is only this difference: the burglar does not ask me to sanction his act.” Through his refusal to participate in his trial and in his imposed duties of citizenship, Rearden affirms that he will not consent to involvement in the system of supposed values entailed by “the public good.” The trial is the defining moment in Atlas Shrugged in which Rearden, a victim of society’s false morality, wholly refuses to give sanction to his oppressors.

After his mills pour the first order of Rearden Metal, a triumphant personal success following a decade of hardships, Rearden goes home and gives his wife a bracelet made of his innovation. In return, his wife ridicules his romantic obsession with his work, while his mother and brother deride him for what are, in actuality, his virtues. What Rearden truly feels for his family is contempt, but he is also reluctant to take a stand against them; he cannot understand their actions, and thus feels obligated to tolerate (and support) them for their weaknesses. Rearden remains a victim who grants legitimacy to his domination by his family and society, until he tells his brother Philip that his fate no longer interests him. He realizes that this was what had caused him to bear the condemnations for nearly ten years, and revokes his sanction.

The National Alliance of Railroads (an organization allegedly created to protect the welfare of the railroad industry) votes on and passes the “Anti-dog-eat-dog Rule,” a precaution claiming to eliminate “destructive competition” by forbidding more than one railroad to operate in a region. However, beyond its stated altruistic, common-interest motive, the rule’s passage is part of a personal deal between Orren Boyle and James Taggart: Boyle will exert influence in Washington to cripple Rearden Steel in exchange for the destruction of Taggart Transcontinental’s primary competitor in Colorado, Dan Conway’s Phoenix-Durango. When Dagny Taggart confronts Conway about fighting back to keep his railroad, he responds, “… they had the right to do it… I promised to obey the majority. I have to obey… It would be wrong. I’m just selfish.” Though he has been wronged by the expropriation of his line, Conway gives his predators sanction for their actions by declining to fight the seizure of his property, and by even recognizing their right to do it, despite his beliefs to the contrary.

Quite differently from Conway, Ellis Wyatt issues an ultimatum to Taggart Transcontinental to serve his needs or be destroyed with him, essentially denying the predators permission to harm him. He carries this denial further when he disappears, not only removing his productivity from the reach of the looter society but also setting his oil wells on fire. His burning wells become known as Wyatt’s Torch, a symbol of defiance: a refusal of sanction that burns until the heroes can return to reclaim the world.

The ever-increasing regulations issued by the looter government in the name of “the public good,” culminating in the passage of Directive 10-289, destroy all remaining incentives for production and soon bring the economy’s growth to a halt. Directive 10-289 gives the government a limitless range of “emergency” economic powers and freezes wages, consumption, and innovation. Employment becomes based on need instead of productivity, as determined by the Unification Board, while the new-found power of those residing in government is exploited to grant favors to friends and other preferred classes. Production and commerce come to a halt; quality plummets as the worst companies receive windfall demand for essential commodities; the good of the public becomes the good of the cronies and scavengers as wealth is pried from the corpse of what was once an economic giant. The philosophy of sacrifice results in a reality of destruction.

“I swear by my life and my love of it that I will never live for the sake of another man, nor ask another man to live for mine.” Though Rearden has not yet met John Galt at the time of his trial, he unsurprisingly reiterates a form of Galt’s oath when he rejects the “public good” that aims to shackle him to the needs of society. No matter how altruistically its actors behave, any system that emphasizes the moral righteousness of sacrifice is barbarous in principle, for it compromises one individual for the benefit of another. Beyond that, it is corruptible. It serves as a means of exploiting man’s passionate desire to live: the villains of Atlas Shrugged are numerous and of many philosophical colors, but all of them are alike in advocating coercion as a means to their ends. The “public good” is merely one pretext by which they seek to guilt those who produce into volunteering their livelihood away without resistance. Should the productive reject the public and withhold their labor, the villains have no value to offer the individual (as one offers a value in exchange for another in mutual trade), but merely a “zero”: the promise that they will not kill, imprison, or otherwise harm the productive if they submit to their demands. When the Atlases of the world, the producers, shrug their burdens and consent to their subjection no longer, the villains will have no control over individuals who understand rationality and production as the only true means of survival. They will no longer possess an avenue to evade reality. They must invariably face the fact that reason is law, that creation is sustenance, that A is A; to do otherwise is to perish.

The post Scholarship Submission: Ayn Rand Institute Atlas Shrugged Essay Competition appeared first on Rare Essays.
