Don't know if I'm preaching to the choir, but seeing how much libs use the trolley problem to defend their favorite war criminal got me thinking about just how cringe utilitarianism is.

Whatever utilitarianism may be in theory, in practice it just trains people to think like bureaucrats who believe themselves to be impartial observers of society (not true), holding power over the lives of others for the sake of the common good. It's imo a perfect distillation of bourgeois ideology into a theory of ethics: a theory of ethics from the pov of a statesman or a capitalist. Only those groups have the power and information necessary to actually act in a meaningfully utilitarian manner.

It's also noteworthy how prone utilitarians are to creating false dichotomies and ignoring historical context. Although this might just be a result of the trolley problem being so popular.

  • Sodium_nitride@lemmygrad.ml
    hexagon
    ·
    3 months ago

    I don't think you are using the word deontological correctly here. A deontological theory is one where you have a moral obligation based on the type of action you perform rather than its consequences.

    Now, you could theoretically make a theory where you first use utilitarianism (and an all-knowing computer) to determine the goodness of any possible action, then make a deontological imperative to do those actions at the times and locations where they produce good results. I have thought about how to make the two theories compatible as well.

    However, meaningfully, for a human with limited knowledge, utilitarianism and deontology aren't going to be isomorphic unless you really stretch the meanings of those two systems. A human will never be able to come up with a deontological ruleset rich enough to maximise utility in every possible situation they will encounter.

    I think it would be more accurate to say that utilitarianism in general can simulate deontology by assigning utilities to the type of action a person performs. For lack of a better term, I would consider utilitarianism a "Turing complete" moral theory, while deontology would be closer to combinatorial logic in moral terms.

    • Tomorrow_Farewell [any, they/them]
      ·
      3 months ago

      I don't think you are using the word deontological correctly here. A deontological theory is one where you have a moral obligation based on the type of action you perform rather than its consequences

      The thing is, we can always make an axiom of a consequentialist code of ethics (just in case: I use the expressions 'system of morality', 'morality system', and 'code of ethics' interchangeably) into an axiom of an equivalent deontological code of ethics. We just say that, instead of [an action being good because it has such-and-such consequences], [you have a duty to perform actions that are evaluated to have such-and-such consequences] (I am using square brackets '[', ']' to delimit the two clauses for clarity). I suppose that does mean it is possible for an action that is not evaluated to have particular consequences to nevertheless lead to those consequences, and for an action that was evaluated a priori to lead to particular consequences to not actually lead to them, and this is a refutation of my original claim, as these systems can end up with different a posteriori descriptions.

      However, I do posit that, in a sense, there is still no significant difference in how deontological and consequentialist systems of morality work prescriptively, as you can't actually know the future with absolute certainty, and every principled subscriber to a consequentialist code of ethics is going to act in accordance with what I previously called an 'equivalent deontological code of ethics': they will try to evaluate an action's consequences a priori, and act in accordance with those predictions.

      However, meaningfully, for a human with limited knowledge, utilitarianism and deontology aren't going to be isomorphic unless you really stretch the meanings of those two systems. A human will never be able to come up with a deontological ruleset rich enough to maximise utility in every possible situation they will encounter

      I mean, you won't be finding many people who actually adopt explicit codes of ethics in general, whether deontological, consequentialist, virtue-ist, or a mix of any of those, especially one that they would actually follow.
      Also, I'm not sure why you think we can't just derive a deontological code of ethics from a given utilitarian one. You basically just add 'you have a duty to perform actions that satisfy such-and-such criteria for good actions in this given utilitarian code of ethics'.
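      Just to make that "one extra axiom" move concrete, here's a toy sketch in Python (all names and numbers are invented for illustration, not any real formalisation):

```python
# Toy illustration: wrapping a utilitarian code of ethics in a single
# deontological axiom. All names and utility values here are made up.

def expected_utility(action, predictions):
    # The utilitarian part: score an action by summing its predicted
    # consequence utilities.
    return sum(predictions[action])

def is_good(action, predictions, threshold=0):
    # Utilitarian axiom: an action is good iff its expected utility
    # clears the threshold.
    return expected_utility(action, predictions) > threshold

def has_duty(action, predictions):
    # The one added deontological axiom: you have a duty to perform
    # exactly those actions the utilitarian code evaluates as good.
    return is_good(action, predictions)

predictions = {"pull_lever": [5, -1], "do_nothing": [-5]}
print(has_duty("pull_lever", predictions))  # True
print(has_duty("do_nothing", predictions))  # False
```

      The duty predicate is just a thin wrapper around the utilitarian evaluation, which is the whole point: prescriptively, the two codes tell you to do the same things.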

      I think it would be more accurate to say that utilitarianism in general can simulate deontology by assigning utilities to the type of action a person performs

      I mean, we can also find a deontological code of ethics equivalent to a given utilitarian one using the method that I have outlined previously. It can even be done with just one additional axiom.

      For lack of a better term, I would consider utilitarianism a "Turing complete" moral theory, while deontology would be closer to combinatorial logic in moral terms

      There are better analogies that would involve just using set theory, if I understand correctly what you are trying to say - that every deontological code of ethics has an equivalent utilitarian code of ethics, and not vice versa. I disagree, as I have provided a method for finding a deontological code of ethics that is equivalent to a given utilitarian one.

      Also, neither utilitarian nor deontological codes of ethics intersect well with virtue-based codes of ethics, as those tell us whether people are good or bad, not whether actions are good or bad.

      • Sodium_nitride@lemmygrad.ml
        hexagon
        ·
        3 months ago

        I mean, you won’t be finding many people who actually adopt explicit codes of ethics in general

        Yeah, this is just theoretical. I am pretty much assuming we are talking about computers calculating morality here rather than actual people.

        There are better analogies that would involve just using set theory

        Yeah, scratch my analogy. It's actually kind of terrible.

        instead of [an action being good because it has such-and-such consequences], [you have a duty to perform actions that are evaluated to have such-and-such consequences].

        Well, the latter is just called rule utilitarianism, which is more or less a moral system that is deontological in form but tries to approximate utilitarianism. Basically, you create a deontological ruleset which tries to predict in advance what maximises utility.

        It only approximates utilitarianism, as I see it, because once a deontological ruleset is laid out, you can't change it. If you then encounter an action which will have negative consequences but which you should perform according to your ruleset, you have to do it, or else you are just doing utilitarianism and calling it deontology.

        You can improve the approximation arbitrarily by making a richer and richer ruleset, but this requires more and more knowledge and computation beforehand.

        • Tomorrow_Farewell [any, they/them]
          ·
          3 months ago

          Well, the latter is just called rule utilitarianism

          Given that, under that system, you have a duty to do something, it is deontological.

          Basically, you create a deontological ruleset which tries to predict in advance what maximises utility

          It's not really predicting anything, or at least not necessarily. Who or what evaluates the possible consequences of an action can be determined in many ways that are more granular than a per-rule basis.

          It only approximates utilitarianism as I see it because once a deontological ruleset is laid out, you can't change it

          The same applies to all the other codes of ethics, considering that they are just systems of logic.

          If you then encounter an action which will have negative consequences but which you should perform according to your ruleset, you have to do it, or else you are just doing utilitarianism and calling it deontology

          If the predictions regarding the consequences of a particular action are reevaluated and it is no longer a good action under a given utilitarian code of ethics, then it also becomes a non-good action under the equivalent deontological code of ethics/you no longer have a duty to perform it.

          I think you have a very narrow and naive view of deontology, which seems to often be instilled in students when Kantian deontology is taught to them through very primitive examples.

          But also, I very much do posit that, as prescriptive systems, deontological ones, consequentialist ones, and deontological-consequentialist mixed ones can't really be distinguished in any significant manner. As such (provided that we are working with a utilitarian system that does not involve any elements of virtue-based codes of ethics), I can just say 'utilitarianism is just deontology in a trench coat'.

          You can improve the approximation arbitrarily by making a richer and richer ruleset

          I have provided a method for finding an equivalent deontological code of ethics that differs from the original one by the addition of just one axiom to its rule set.

          • Sodium_nitride@lemmygrad.ml
            hexagon
            ·
            3 months ago

            I think you have a very narrow and naive view of deontology, which seems to often be instilled in students when Kantian deontology is taught to them through very primitive examples.

            Maybe this is the case.