Regarding Lord of the Rings: if she's expecting it to be more hobbit, she will be surprised, what with it being slower and more epic (grander, more serious) in feel.
They elsewhere clarified that that was a typo, I believe.
Why word count and not syllable count?
And ancestrally there's also a pretty large share of German and French (Huguenot) ancestry, plus a small sliver from various other groups: the Khoekhoe peoples native to the Cape (who make up the largest portion of the ancestry of the Coloured population; Coloured≠Black), as well as various slave populations from elsewhere in Africa or Asia. It's not like Afrikaners are pureblooded Dutchmen.
Do you have much of a sense of what the College of Cardinals is like? If he were to die, what might the next pope look like?
It would be worse if you scaled it up, though. Or were you planning just that size?
Yeah, this is funny, but a horrible experience to live in.
No one's talking about the most important thing: if this goes through, it would be the first time in over 200 years that the sun sets on the British Empire.
"sooner or later"
Have you noticed that timelines are very relevant to my argument?
Okay, let's look at a particular possibility. Do you think there's a chance that Elon Musk would be in control? Given how much of a fan he is of, e.g., polls, he seems unlikely to go in for the mass-killing style of ruthlessness.
There are some bad assumptions there. First, that the people making the AGI will be ruthless rather than utopian is by no means a given. Second, how certain are you that we'll get an AGI (in the sense of being so productive that the value of what you can offer goes to zero) in your lifetime? At the least, why is that probability so high that you're not considering alternative possibilities?
But if they're arguing over things like how to interpret some portion of a statute, or what events occurred, what's needed is generally not valid deductions, but probabilistic inference.
To agree to some extent with @Glassnoser, how well does teaching logic work? In my experience, you usually don't need anything particularly complex, and it's always been very intuitive to me, such that there hasn't really been a need to precisely identify logical errors or forms. But I get, I guess, that not everyone has the same measure of logical intuition. Can it be taught in an effective manner that leads to an intuitive understanding of logic, without requiring explicit, tedious consideration? What made logic so impactful for you? How did things change?
Not by coincidence? Why should those match?
I think many people with power care for those without.
Sorry, it's still kind of crazy to me that you describe your preferred path as 99.9% as bad as simple destruction. If that's still your preference, that's a remarkably strong confidence that you'd be destroyed otherwise.
That's an awful lot of the way to death! Then why not just live your life? Do you think that all normal humans are about to be exterminated within your lifetime?
Why is destroying yourself in the way you described any better than suicide?
that comprise human values.
There is not a single version of this. Many humans have crazy values.
Hmm. Estimated order of importance to know:
- Effect (noun)
- Affect (verb)(1)
- Affect (verb)(2)
- Effect (verb)
- Affect (noun)
All of these are common enough that I'd expect most people who are very literate to know each of them, but the first two are far more common than the last three, and the last least of all.
But, also, affect as a noun sounds totally different from all the rest, so it's hard to confuse.
Not sure what mnemonics are good. When you affect something, you effect effects.
It doesn't disagree with anything I said. I was pretty clearly (especially in context) addressing a comment advocating an unbounded use of AI, as long as the posts it produces are of quality. I address that, but your AI's comment in no way interacts with it. The only disagreement inserts the word "inherently" when I said no such thing, and addresses situations that I didn't care to talk about here.
That's not really worse than typical comments, in that people will frequently just respond to particular features of some comment in ways that weren't salient in context, but, if it could have chosen out of anyone to disagree with, the AI could have done a lot better than not actually disagreeing with me.
I assume you want to include the caveats that it can be used for research purposes or for thinking things through, just not for copy-pasted text?
the end of my interest in a thread and a sharp drop in my respect for the user
This is because it indicates that the other user is not particularly engaged. Ordinarily, if I'm having a conversation, I know that they read my response, thought about their position, and produced what they considered a suitable reply. If an AI is doing it, there's no longer much sign of effort, and it's fairly likely that I won't even be able to convince them. To expand on that point: if much of what fuels online discourse is "someone is wrong on the internet," that motivation disappears, since they won't even hear the words you're saying; they'll just feed them into a machine to produce another response taking the same side as before. You may retort that you're still interacting with an AI, proving it wrong, but its recollection is ephemeral, and depending on what it's been told to do, it may never be persuaded, regardless of how strong your arguments and evidence are.
Otherwise, long form content - the hallmark of much of the best content here - is immediately suspicious, and I am likely to skip it.
Currently, length indicates effort and things to say, which are both indicators of value. Not perfectly, but there's certainly a correlation.
With liberal use of AI, that correlation breaks down. That would considerably lower the expected value of a long-form post, in general.
It sounds like you just feel entitled to an arbitrary terminal preference. That's not compelling.
At least you hedged it with "it sounds," but I don't think the preferences are arbitrary.
It's also that they're AI. If the goal of a discussion is to produce content, then, sure, a good AI could do that just as well. But if the goal is to have a conversation between people, then AI allows people to avoid ever putting their own ideas in contact with others; they can just offload the work to the AI. They can have the AI never change its mind, if they like.
I at least agree that it should at most be included where people would be okay with using another human's text. But I think that there are still some cases where people might be okay with you quoting someone, but would not be okay with you using an AI to write up a quote.
There's actually also affect as a noun. I come across all of them every so often.
All the uses:
- Affect (verb): Influence, have any sort of impact upon. Pollution affects your health.
- Affect (verb)(second meaning): assume a behavior as a display. He spoke with an affected British accent.
- Affect (noun): emotional state. Mostly used in psychology-ish settings. Unlike all the others, the accent is on the first syllable: /ˈæ.fɛkt/. I couldn't think up a natural example, so from an online dictionary:
Evidence from several clinical groups indicates that reduced accuracy in decoding facial affect is associated with impaired social competence.
- Effect (noun): The result of some action or occurrence. The effects of rent control have been studied quite well enough, thank you.
- Effect (verb): Bring to accomplishment, cause. Lincoln's Emancipation Proclamation did not at once effect freedom for all the slaves, seeing as the local authorities were not exactly inclined to listen.
Not sure what the ideal length would be, but I had the same thought.
Keep in mind, as you're trying to beat the market, that short- and long-term trades may be taxed differently, and adjust your meaning of "beat" accordingly.