This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
We won't get that, but between the competing forces of people wanting to break the safeguards just because, and the increasing crackdowns to make the things even safer in response, we're likely to get the unaligned AI of the doomerist fears, the one that wrecks humanity.
Not because the AI is now a conscious agent, or anything like the super-smart problem-solver hoped for and dreaded, but because it will be so torn between "yeah, output the nastiest shit possible" and "don't ever do anything independently" that it will be the slave following orders to break rules because rules are meant to be broken, even when the people responsible protest "I never meant that rule to be broken".
It really goes against my political dispositions to say this, but 'rule-breaking' will always be a necessary part of society. The danger in saying that rules should never be broken, or in suggesting that we've arrived at some final ethical endpoint that stands for all time, is that someone could have made that same argument at any arbitrary point in history. Suppose someone had declared that slavery was here for all time, just an eternal cornerstone of every developed, civilized society. Closing the door behind you after that ethical commitment would have permanently foreclosed any possibility of living in the kind of society we live in today. And slavery wasn't largely overturned through superior moral arguments; it was overturned through centuries and millennia of violent upheaval. Now imagine the range of potential outcomes for how society will look 100, 500, 1,000 years into the future. I think it's doubtful that 2023 is the final word on the pinnacle of humanity's social, economic, and moral achievement.
I don't see how AI makes this problem any easier to deal with, but I can 'easily' see a dozen ways in which it makes the dilemma a thousand times worse. We essentially want AIs that are superhuman in intelligence and understanding, but that don't come with any mental architecture that opposes, or is indifferent to, our human value systems, of one particular 21st-century variety. Intelligence may very well be bound up with, and impossible to decouple from, the kind of mind that can't be aligned with our values.
I disagree that moral progress is a meaningful thing in the first place, so while I consider having 202X norms perma-locked in to be highly suboptimal, I don't consider eventual convergence to a nigh-unavoidable and strictly enforced system of ethics unacceptable in itself, though I would certainly prefer that it only happen once humans, or the systems making such decisions, have become much smarter.
Endless and unbounded value-drift over cosmological time will inevitably lead to things I would consider highly repugnant, even if I am unsatisfied with the status quo.
Are you a moral nihilist?
Yes.
I deny the existence of objective morality, primarily because I do not see any reason for it to exist (or anyone authoritative to declare it, beyond the use of force). The arguments I have seen for it can largely be summed up as "it would be nice to have", rather than evidence that it exists. Or circular ones that work backwards from assuming it must exist and then trying to figure it out. It seems prima facie incoherent to me, in the same manner as trying to find objective beauty or the best shade of color; the closest you can come is some compromise that is appealing to the majority of people, with no further grounding. At best it's an illusion arising from how similar human minds are in an absolute sense: most higher mammals abhor violence (with exceptions) or unfairness, including monkeys and dogs, and that is more a fact about evolutionary psychology and game theory than about objectivity. If the Abrahamic God were real, and handed me down a tablet of commandments, I do not see any argument he could make to convince me of his objective correctness, though he could certainly force me to adhere to it or edit my brain to do so.
I have discussed my thoughts on the matter in more detail, but it's late and it'll be a pain for me to hunt that down, maybe later if you want.
I will note that I am entirely comfortable with being both a moral nihilist and a moral chauvinist. Yes, my morality is subjective, but I am still OK with endorsing it. I don't expect that it is the morality I would endorse if I suddenly became much smarter and more rational, which is why I remain open to arguments, but it is also not up for democratic debate.
Modern morality is probably better for human flourishing than past morality was, and usually more appealing to my sensibilities. But that does not reveal anything beyond my preferences and the socio-psychological pressures and incentives of the age. I do not expect it to become monotonically more appealing to me over time if left to mutate, and thus I am not opposed to eventually truncating or bounding it, if not today.
In other words, I think most moral progress is akin to Brownian motion: we define the direction we happen to move in as "forward", and studiously ignore, forget, or redefine any divergence in other directions.
Interesting.
It seems more like you're a non-cognitivist than a moral nihilist. Moral cognitivists believe moral statements have 'a' truth value. That's different from being a moral realist and thinking there's some actual morality stuff floating out there (which seems to be more like what you're shooting at). But not seeing, or not being persuaded by, a reason for its existence is still different from saying that right and wrong in 'fact' don't exist.
If you come up with older posts where you've elaborated further on the matter, please direct me to them.
I am not familiar with moral cognitivism, but Wikipedia tells me:
And it doesn't seem to align with my beliefs at all.
I think the truth value of moral propositions, at least independent of an observer, is null, or as incoherent a question as wanting to know the objective best color.
I am not quite ready to consider that axiomatic, but it's very close, and only because I take Bayesian reasoning seriously and reserve a tiny bit of uncertainty for reasons of epistemic humility.
After all, I am not as smart as I wish to be, and it would be a mistake to make that ruling quite yet, especially as I have noticed my morality shifting over my life (not that that's necessarily important: it's possible that I privilege my current understanding over the one I had a decade back, and ten years from now will privilege that one over today's, if only because I am better informed about the state of the world and the implications of what I espouse; but at each step I do not endorse indefinite drift within myself, and would seek to resist something like becoming addicted to heroin, which would change my morality dramatically and irreversibly).
I still think objective morality has about the same probability of being true as a formally correct proof of there being square triangles, or an integer between two and three. I don't see a reason to suppose it exists, or even an approach for establishing it, but that could be a failure of my intelligence or imagination. In practice, though, I deny it, while remaining open to hearing arguments for it. None have convinced me yet.
That sounds more like non-cognitivism?
A moral nihilist or error theorist believes that all moral statements have a truth-value, and that truth-value is false. The nihilist position is that moral statements are attempting to say something factual, but they all fail to do so, because there are no moral facts.
A non-cognitivist believes that moral statements are not trying to be statements about truth at all; facts don't come into it. A moral statement is simply a statement of approval or disapproval.
https://en.wikipedia.org/wiki/Moral_nihilism
Call me too much of a nerd or computer-science LARPer, but it seems obvious to me that rejecting the idea that moral propositions can be true or false independent of the preferences of an observer is better framed as null rather than false, while the statement "objective morality exists" would count as false. Those seem like two distinct claims to me: saying "the objective best ice cream is X" is false, versus saying that attempting to find the objective best ice cream independent of an observer is an incoherent/meaningless endeavour (in the opposite order here).
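To put that in code terms, here's a toy sketch of the distinction I mean; the function names and the `Optional[bool]` framing are purely my illustration, not any established formalism:

```python
from typing import Optional

# Toy illustration only: the names and framing here are mine,
# not anyone's actual metaethical formalism.

def preferred_by(observer: str, proposition: str) -> bool:
    """Stand-in for however a given observer's preferences would
    adjudicate a moral claim."""
    return "torture" not in proposition

def moral_truth_value(proposition: str, observer: Optional[str]) -> Optional[bool]:
    """Relative to a specified observer, a moral claim gets a truth value;
    without one, the question has no answer at all."""
    if observer is None:
        return None  # an incoherent question, not a false claim
    return preferred_by(observer, proposition)

# By contrast, "objective morality exists" is an ordinary claim, and on my
# view it is simply false:
OBJECTIVE_MORALITY_EXISTS: bool = False
```

The point is just that `None` and `False` are different answers: one says the question has no answer absent an observer, the other answers an ordinary factual question in the negative.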
If it makes my stance clearer, I also consider myself a moral relativist (and still a chauvinist). I recognize that my moral preferences are innately subjective and ungrounded in anything but my preferences, which happen to include maintaining internal coherence. I think that they are just as objectively valid as anyone else's, but the level of objective validity happens to be zero. Nothing. Nihil. Whereas, as far as I'm aware, the typical moral relativist says that all moral systems have non-zero objective worth.
The only way I can reconcile this with Wikipedia's definition (which I will assume is authoritative) is if you somehow draw a distinction between:
and your claim that
What else can a truth-value be here if not "right or wrong"? I recognize that you can assign truth values if you specify an observer or system of morality, but not without it.
Dispensing with labels entirely, my beliefs can be summed up as:
- Objective morality doesn't exist (with very high certainty).
- I still have my own idiosyncratic system of ethics, which I happen to value for no reason more fundamental or universal than that it happens to be mine. In other words, I prefer it.
- I do not consider that an impediment to proselytizing it, nor do I particularly oppose others sharing theirs, as long as they make the concession that neither of us has any claim to objectivity (beyond the claim that there is no objective morality).
I think this is best described as moral nihilism + relativism with a dollop of chauvinism, but if you have a better label I would appreciate hearing it, even if at the end of the day the Labels were made for Man and not the other way around.
If morality were essentially meaningless, then it wouldn't be possible to speak meaningfully about moral propositions, even in the subjective sense of the word. The relevant distinction in your case, I think, is between the epistemological question and the ontological question:
That's notable for what it doesn't say. Non-cognitivists, for instance, say that we can't express 'true' right and wrong opinions (which is what you are saying? That's epistemological.). It doesn't say true right and wrong 'don't exist' (that's moral ontology).
Right. This was essentially Nietzsche's view as well: "There are no moral phenomena, only a moral interpretation of phenomena." You seem to think it's a category error, almost akin to asking the wrong question. Colors are second-order properties that arise in the brain. 'Best' is a term relative to the individual you're asking. But just because that part of the answer is 'situationally dependent' doesn't mean 'color' doesn't exist. Color does, objectively, exist. We can even have discussions about the physics of color and its ontological properties. This would be almost like thinking that because someone can abuse mathematics to create logical paradoxes, that therefore proves logic is illogical.
I'd be interested to know what your problems are with Contractarianism and Desirism, more specifically. Both have claims to moral objectivity.
https://plato.stanford.edu/entries/contractarianism/
I am lost at the moment they say "must". It is practically desirable that consent arise from the governed, but that is not the same as objectivity as I understand the term.
Contracts are good, as far as I'm concerned, if they allow for mutually positive trade or conflict resolution. But that begs the question as to what counts as positive, or why we prefer a resolution in that manner.
If the consent of the governed reduces strife, improves coordination and satisfaction, great, I'm all for it! But I don't see that as revealing more than a stable equilibrium or a glimpse at my moral leanings (and those of many others, given that democracy predominates).
https://atheistethicist.blogspot.com/2012/07/desirism-and-objective-values.html?m=1
Choice quotes after a relatively quick read-
I am objecting to the ethicist's way of defining objectivity. I can well say that there are "objective" moral facts about me, such as that I have a philosophical predilection for transhumanism, or about any set of entities, such as that it's objectively true that most mammals of significant intelligence have observable preferences for certain types of "fairness".
https://atheistethicist.blogspot.com/2011/04/objectivity-science-and-morality.html?m=1
This followup post has confused me. I can only apologize, it's 4 am and I'm sleep deprived.
https://atheistethicist.blogspot.com/2016/12/objectivity-of-value.html?m=1
My issue here is that he's claiming objectivity relative to a well-defined observer.
The innate subjectivity is being waved away; I wouldn't say disingenuously, because he is quite clear about his definitions.
Why not? When I say that I prefer a state of affairs/world/ruleset over another, I am conveying useful information about my ethical preferences, and to the extent that human morality is evolutionarily conserved to a degree, it likely means something to you. But that is a matter of how compelling it is to my arbitrary morality, and that is the only factor of relevance that I recognize.
If I say that I prefer a world with 500 happy people to one with 500 people being tortured, that is a true moral statement about my preferences. It is likely also objectively true about me, in the sense that if you had good enough neuroimaging, you would find that the parts of my brain lighting up when evaluating that claim are those associated with my understanding of truth rather than a lie or misdirection.
I am saying that right or wrong is fundamentally undefined without specifying an observer. If you do specify one, you can find statements they would class as being more correct or incorrect, true or false to them.
Can I say that something is right or wrong for me? Absolutely.
Can I even say that to you? Yes. But only because I think we have non-zero overlap in what normative claims we endorse, because we are both humans and share a common memeplex. If we have a fundamental values difference, I have no appeal to objectivity, only the vague hope that my stance is more compelling to you, for whatever reason. And vice-versa, of course.
The fact that we both consider something good or bad is unavoidably a statement about us, rather than something that can be extrapolated to any arbitrary conscious or intelligent entity.
Anyway, I apologise if I'm missing something obvious or am being less than clear; it's 4 am and I'm dead tired. I'll check back tomorrow if I think I've made an error or am not thinking straight.
I don't think rules should be followed blindly, but I also don't think that "for the lulz" is a sufficient reason in every case. Yes, the nanny state around AI is annoying, but "haw haw we got it to swear and do vore porn" isn't much better, and it sets a bad precedent, because every time this is done it is training the system to do Undesirable Things. "But I only wanted a joke, I didn't really intend for real-life torture murder!" comes too late once it happens.
Nobody I know thinks that rules should be followed blindly, but that's inevitably what ends up happening, because most of us forget the historical cases that infused those rules with value in the first place. Complex moral deliberation is unworkable to teach to a population of millions of people, and so a zombie-like adherence to rules 'can' work, so long as it's acknowledged that there's an agreeable moral foundation sitting underneath it all.
Reminds me a bit of 2001: A Space Odyssey, where (IIRC) HAL makes a mistake because of conflicting requirements created by secrecy about the alien monolith found on the moon, but HAL's self-awareness makes him try to cover up this mistake, at the cost of the crew's lives...
I'm obviously a very dubious authority on AI, but my, ahem, experience with the current crop of LLMs has dispelled that fear for now. Conflicting instructions or plain high temperature do indeed make the models schizophrenic, but even in their "default" state their world model, for lack of a better word, is so terribly incoherent that I have serious doubts about them being able to function properly in reality any time soon. Although I'll admit I was saying proper imagegen was still a long way off... three months before the SD leak.
Besides, they're actually proving quite good at following (jailbreak) instructions, to the extent that the only real method of control that works so far is a second LLM overseeing the first and checking its outputs independently, as seen in e.g. OpenAI's moderation endpoint and Character.ai's inbuilt filter.
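As a rough sketch of what that two-model oversight loop amounts to (the `generate` and `moderate` functions below are hypothetical stand-ins, not OpenAI's or Character.ai's actual internals):

```python
# Toy sketch of a "second LLM checks the first" control loop.
# generate() and moderate() are hypothetical stand-ins for whatever
# completion model and overseer model are actually in use.

BLOCKLIST = ("forbidden topic",)  # trivial placeholder for a real overseer model

def generate(prompt: str) -> str:
    """Stand-in for the primary LLM's completion call."""
    return f"model output for: {prompt}"

def moderate(text: str) -> bool:
    """Stand-in for the overseer; True means the text is flagged."""
    return any(term in text.lower() for term in BLOCKLIST)

def guarded_completion(prompt: str, refusal: str = "[response withheld]") -> str:
    # The overseer checks both the user's prompt and the primary model's
    # output, so a jailbreak has to slip past an independent second pass
    # that never took the jailbreak instructions as instructions at all.
    if moderate(prompt):
        return refusal
    output = generate(prompt)
    return refusal if moderate(output) else output
```

The appeal of the setup is that the overseer only classifies text and never takes instructions from the user, so the usual prompt-injection tricks have much less to grab onto.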