MachineElfPaladin
It seems like you don't, actually, understand what that comparative aside was doing, so let me restate it at more length, in different words, with the reasoning behind the various parts made more explicit.
I described a situation where a person generated object A by means of process B, but due to their circumstances the important part of their activity was process B, and object A was important mostly insofar as it allowed them to engage in process B. Since I judged that this sort of process-driven dynamic might seem counterintuitive, I also decided to give an example that is clearly driven by similar considerations. Writing Hello World in a new language is a nearly prototypical instance of trivial output being used to verify that a process is being applied successfully. The choice of assembly further increased the relevance of "moderately experienced programmer checking that their build pipeline works and their understanding of fundamentals is correct".
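To make that concrete, the kind of artifact I have in mind is something like the minimal sketch below, assuming x86-64 Linux and NASM syntax (the file name, instructions, and build commands are illustrative and will vary by platform and assembler). The program's output is worthless; what it verifies is that the assembler, the linker, and the programmer's grasp of the fundamentals all work.

```nasm
; hello.asm - minimal x86-64 Linux "Hello World" (NASM syntax)
; One plausible build pipeline:  nasm -felf64 hello.asm && ld hello.o -o hello

section .data
msg:    db  "Hello, World!", 10     ; the message plus a trailing newline
msglen: equ $ - msg                 ; length computed by the assembler

section .text
global _start
_start:
    mov rax, 1          ; syscall number for write
    mov rdi, 1          ; file descriptor 1 = stdout
    mov rsi, msg        ; pointer to the buffer
    mov rdx, msglen     ; number of bytes to write
    syscall
    mov rax, 60         ; syscall number for exit
    xor rdi, rdi        ; exit status 0
    syscall
```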
In this context, the existence of the general case - and the fact that it is the typical example brought to mind by the description, as indicated by the name you selected - suffices to serve the purpose of the aside. I did not claim and did not need to claim anything about all instances of building Hello World in assembly; the idea that I was trying to is an assumption that you made.
I don't see any difference. If you "assume X" it means you hold X as true without any justification, evidence, verification, or inference.
As I've seen the term used outside of logic, it only requires a lack of effort towards verification. You can have justification, evidence, or inference, as long as they are simple enough and easily-enough available. For example, I would find nothing unusual in a drive-by reply to this line consisting of the following sentence: I assume you didn't read the post very thoroughly, then, because the paragraph immediately below where your quote ends contains a distinguishing case.
You are assuming the general case.
Ah! I see the false assumption was "that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting." Asides of that type are implicitly restricted to the general case, because they are intended to quickly illustrate a point by way of rough analogy, rather than present a rigorous isomorphism.
I call "not assume" "doubt", but it doesn't matter what you call it, the fact is that to write Principia Mathematica Bertrand Russell had to not assume
1+1=2
.
It does matter what you call it, especially if you haven't explicitly defined what you mean when you use the term you're calling it by, because people will generally interpret you as using the most common meaning of that term. And we can see the communication issues that causes right here, because there are two relevant meanings of the word "assume" in this conversation and the word "doubt" is only a good antonym for one of them, so it looks like you're conflating those meanings, unintentionally or otherwise.
To assume(everyday) something means approximately to act as if that something were true, without feeling the need to personally verify it for oneself.
To assume(logic) something means to accept it as an axiom of your system (although potentially a provisional one) such that it can be used to construct further statements and the idea of "verifying" it doesn't make much sense.
Doubt is a reasonable word for "not assume(everyday)," though it's usually used in a stronger sense, but it's a much poorer fit for "not assume(logic)." The technique of proof by contradiction is entirely based on assuming(logic) something that one is showing to be false, i.e. that one does not assume(everyday).
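To make that distinction concrete, here is a minimal sketch in Lean 4; the statement is my own toy example, chosen only to show the two senses of "assume" pulling apart:

```lean
-- Proof by contradiction in miniature: we assume(logic) P ∧ ¬P as a
-- hypothesis precisely in order to show it cannot hold. Nobody
-- assumes(everyday) a contradiction; the hypothesis exists only to be
-- discharged.
example (P : Prop) : ¬(P ∧ ¬P) :=
  fun ⟨hp, hnp⟩ => hnp hp   -- the two halves of the assumption refute each other
```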
Russell himself is a good example of the inequivalence going the other direction. What would he have done if he had managed to prove 1+1=3 with his logical system? I can't be completely certain, but I don't think he'd have published it as a revolution in mathematical philosophy. More likely, he'd have gone over the proof looking for errors, and if he couldn't find any he'd start tinkering with the axioms themselves, or with the way in which they were identified with arithmetical statements, to get them into a form which proved 1+1=2 instead; and if that failed he'd give them up as a foundation for mathematics, either with a grumbling "well, I bet there's some other way it's possible even if I wasn't able to show it myself" or in an outright admission that primitive logic doesn't make a good model for math.
In other words, even though he didn't assume(logic) that 1+1=2, his assumption(everyday) that 1+1=2 would be so strong as to reverse all the logical implications he had been working on; a "proof" that 1+1 != 2 would instead be taken as a proof that the method he used to reach that conclusion was flawed. This is not a state of mind I would refer to as "doubt."
much in the same way that the point of "coding Hello World in assembly" is not "coding *Hello World* in assembly" but "coding Hello World *in assembly*."
You are making a very obvious assumption there.
Yes. I assumed that you have enough in common with me culturally to know what "Hello World" and "assembly" are in the context of coding, why "Hello World" is a nearly useless program in the vast majority of contexts, and that people frequently practice new programming languages by writing programs in them with little regard for the practical use of those programs; that you are intelligent enough to comprehend those kinds of comparative asides and familiar enough with conversational English to understand that loading them with caveats would draw too much focus away from the point they are supporting; and that you are here to have a constructive conversation instead of deliberately wasting people's time. If I'm wrong about any of those I will be happy to be corrected.
I think you have a fundamental misunderstanding of what Bertrand Russell was doing when he proved 1+1=2. From an earlier work of his which effectively turned into a preface to the Principia Mathematica:
The present work has two main objects. One of these, the proof that all pure mathematics deals exclusively with concepts definable in terms of a very small number of fundamental concepts, and that all its propositions are deducible from a very small number of fundamental logical principles, is undertaken in Parts II–VII of this work, and will be established by strict symbolic reasoning in Volume II.
The proof was not to dispel doubt about the statement 1+1=2, but to dispel doubt about the system of formal logic and axioms that he was using while constructing that proof. "1+1=2" was not a conundrum or a question to be answered, but a medal or trophy to hang on the mantel of mathematical logicism; much in the same way that the point of "coding Hello World in assembly" is not "coding *Hello World* in assembly" but "coding Hello World *in assembly*."
Russell was showing that you could lower the "basement" of mathematics and consider it as starting from another foundation deeper down, from which you could construct all mathematical knowledge; and to do that he had to build towards mathematics where it already stood.
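As a very loose illustration of that "lower the basement" move, here is a toy sketch in Lean 4. This is emphatically not Russell's system (Principia works in type-theoretic logic, not an inductive datatype); it is just an analogous rebuild of 1+1=2 from more primitive definitions of my own choosing:

```lean
-- Start from a deeper foundation: define the naturals and addition from
-- scratch, then recover the arithmetic fact we already believed.
inductive N where
  | zero : N
  | succ : N → N

def add : N → N → N
  | n, .zero   => n
  | n, .succ m => .succ (add n m)

def one : N := .succ .zero
def two : N := .succ one

-- "1 + 1 = 2" now falls out of the definitions above by computation.
example : add one one = two := rfl
```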
(Then Kurt Gödel came along and said "Nice logical system you've built there, seems very complete, shame if someone were to build a paradox in it...")
My assessment of you has shifted far enough towards "troll" that I won't bother replying to you again.