Glassnoser
0 followers   follows 0 users  
joined 2022 October 30 03:04:38 UTC

				

User ID: 1765


There is no scenario where Canada becomes part of the US voluntarily. It just isn't politically possible. Canada has a deep-seated anti-Americanism, which doesn't normally manifest as hatred of the US, but does manifest as a deep conviction never to be part of it.

Remember, Canada was largely founded by Americans who were loyal to the Crown during the American Revolution and who established new settlements in a freezing cold, theretofore sparsely populated territory. It is the only country founded in explicit opposition to the founding principles of the US. Then followed some two hundred and forty years of selective migration, as the Canadians who did not care about this left for the more prosperous and warmer US.

Today, the politics are very different, but not being American is still the single core defining feature of our national identity, which we latch onto because we are culturally so similar. Quebec is another story, in that it has a different ethnic origin and a separate national identity, but that only makes voluntary annexation more certainly impossible, because a constitutional change of this kind would require unanimous agreement by all ten provinces. If English Canada defines itself by not being American, modern Quebec defines itself by its French language, and there is no more sacred political principle in Quebec than the belief that the French language must be protected by law. Those laws would undoubtedly violate the First Amendment. They already violate Canada's own constitutionally protected freedom of expression, but Quebec sidesteps that using the notorious notwithstanding clause. Quebec will not join the US and be forced to give them up.

No amount of economic pressure is going to make Canadians want to give up these cherished identities. For most of our country's history, Canadians have been able to increase their incomes substantially by moving to the US. The professional class in Canada can still do this, and there is still a significant brain drain. As irrational as it may seem, the ones who remain do not care as much about their material well-being as they do about preserving their independence and national identity, even if they associate it with ideas about peacekeeping and free healthcare rather than loyalty to the British Crown.

Annexation is extremely unpopular and there is an absolute determination not to get stuck with what is regarded here as a seriously dysfunctional political culture.

The tariffs the US just announced are about 10 to 20 times larger than the tariffs that most other countries have on US goods.

Firms that sell goods at the marginal cost of production deserve to survive.

All voters know is that they're not corporations.

Unless the market expected the tariffs to be worse and is reacting badly to the news that they're lower than expected. Of course, I don't really think that's what's going on.

GDP per household is $224,000 per year.
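For scale, a figure like that comes from dividing total GDP by the number of households. A quick back-of-the-envelope check, using assumed round numbers (roughly $29.5 trillion of US GDP and 132 million households, both assumptions for illustration rather than official statistics):

```python
# Rough sanity check with assumed figures, not official statistics.
gdp = 29.5e12          # assumed annual US GDP, in dollars
households = 132e6     # assumed number of US households

per_household = gdp / households
print(f"${per_household:,.0f}")  # prints $223,485
```

Slightly different GDP or household estimates move the result by a few thousand dollars either way, which is consistent with the ~$224,000 figure.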

Companies may wish to domicile in your country (especially if you have low corporate income tax rates) in order to access your consumers and/or workforce.

Foreign companies will sell less, not more, to Americans, unless they crowd out domestic production (which tariffs necessarily reduce) to sell domestic goods instead of imports; but since overall domestic production would be lower, you don't benefit from this. Tariffs can in no way move production into the country on net. They only change what is produced (e.g. replacing services with manufacturing).

If everyone else is doing tariffs except you, then the economy is already distorted; and implementing reciprocal tariffs may "un-distort" the global economy.

No, they can't. They can only add to the trade barriers and add to the distortion. The only way to undistort the economy is by subsidizing trade, effectively paying tariffs for foreign companies, but that just allows other countries to extort you.

If you want to raise revenue and you don't fear a trade war, tariffs may have less of an impact on GDP than other methods of taxation (e.g., income tax).

The income tax certainly has a greater impact on GDP because it is easier to avoid buying imported goods than to not work.

If you are going to do protectionism, tariffs are better than subsidies.

Tariffs will change the relative cost of goods, but being a tax they should be net deflationary rather than inflationary.

There is, in effect, very little difference. Subsidies send money to other countries whereas taxes take money from other countries, but most of the tax incidence falls on consumers within your country, so the difference is small.

Tariffs allow other taxes to be reduced while subsidies require other taxes to be raised, so the effect on purchasing power is about the same. Whether one is more inflationary than the other is unimportant.

What makes this classified information? The actual targets were not specified.

I agree with this.

Why has the rule enforcement been so lax ever since we moved from /r/themotte? Or is that a false impression?

Why is it a problem at all?

I think you're missing the point, which is that this violates her First Amendment right to freedom of speech.

We are taught about it as one of the reasons for the War of 1812, when the US tried and failed to conquer what would become Canada.

I think his intelligence is greatly overrated. What is this high opinion many have of him based on? He made an incredibly stupid comment about how a job is worth a million cheap toasters or something and from that point on, I have not thought he is particularly smart. Sure, he might be a bit smarter than your average politician, but that is a low bar.

He seems to me like someone who is interested in ideas and has some half-decent debating skills, but he is not especially good at actually thinking.

Using the Poisson distribution, I think it's somewhere around 3-7% depending on how you do it. So it's very fishy.
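For illustration, a Poisson tail probability of this kind can be computed directly from the distribution's formula. The rate and observed count below (0.8 expected, 3 observed) are purely hypothetical stand-ins, since the actual numbers under discussion are not given here:

```python
import math

def poisson_tail(lam: float, k: int) -> float:
    """P(X >= k) for X ~ Poisson(lam), via the complement of the CDF."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

# Hypothetical numbers: 0.8 events expected, 3 observed.
p = poisson_tail(0.8, 3)
print(f"{p:.3f}")  # prints 0.047
```

Different choices of rate (e.g. whatever baseline you estimate the expected count from) shift the result, which is why an answer like "somewhere around 3-7% depending on how you do it" is reasonable.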

Is this a deliberate pun?

What is a VA?

You also save an enormous sum of money don't you?

Yes, agency. How do you know they know how to implement it?

there's many, many people who think like Ziz

I put very little weight on this. It seems obvious to me that it's just become a sort of ingroup belief that it is now trendy to have. Ten years ago, it was the opposite. Virtually everyone in AI research found the idea to be ridiculous. Within the last few years, the balance of opinion has changed without any significant relevant new information on what AI will be like.

Every estimate that is actually based on people betting real money or which weights estimates based on the predictive abilities of those providing the estimates gives a very low probability of this happening.

The answer is that he doesn't understand Chinese; he plus the room understands Chinese.

I use AI a lot at work. There is a huge difference between writing short bits of code that you can test or read over and see how it works and completing a task with a moderate level of complexity or where you need to give it more than a few rounds of feedback and corrections. I cannot get an AI to do a whole project for me. I can get it to do a small easy task where I can check its work. This is great when it's something like a very simple algorithm that I can explain in detail but it's in a language I don't know very well. It's also useful for explaining simple ideas that I'm not familiar with and would have to look up and spend a lot of time finding good sources for. It is unusable for anything much more difficult than that.

The main problem is that it is really bad at developing accurate complex abstract models for things. It's like it has memorized a million heuristics, which works great for common or simple problems, but it means it has no understanding of something abstract, with a moderate level of complexity, that is not similar to something it has seen many times before.

The other thing it is really bad at is trudging along and trying and trying to get something right that it cannot initially do. I can assign a task to a low-level employee even if he doesn't know the answer and he has a good chance of figuring it out after some time. If an AI can't get something right away, it is almost always incapable of recognizing that it's doing something wrong and employing problem solving skills to figure out a solution. It will just get stuck and start blindly trying things that are obviously dead-ends. It also needs to be continuously pointed in the right direction and if the conversation goes on too long, it keeps forgetting things that were already explained to it. If more than a few rounds of this go on, all hope of it figuring out the right solution is lost.

I'm not sure how the minutiae of laws were well-known back then.

Maybe I'm misunderstanding the question, but laws are organized into books that are indexed. You look up the relevant statute, search for the right section, and then read a few paragraphs describing the law. If you need to know the details of case law, you consult a lawyer. Lawyers go to law school and read relevant cases to learn how judges are likely to rule on similar future cases.

You still need lawyers to do this because ctrl-f doesn't return a list of all the relevant legal principles from all the relevant cases.

There also has been a massive explosion in the number and complexity of laws since the word processor was invented.

If it is missing a crucial characteristic of human intelligence, how can you say it is AGI? I can believe it could do well on an IQ test, but given that it has a totally different distribution of abilities, the relevance of those tests is very low: the predictive power of an IQ test result is dramatically lower for an AI than for a human. So while it might score a 120 IQ, it is in no way as competent as an actual person with an IQ of 120.

Once it gets agency, we will probably discover something else it is lacking that we didn't think of before. So we can't even modify the test to include agency to get something as good as an IQ test is for humans. We need to remember that it has a different skill distribution and one which we are discovering as it improves. That's why the ultimate test has to be the broadest set of possible tasks that humans do, not these narrow tests which happen to be highly predictive for human abilities.

No AI has ever passed a Turing Test. Is AI very impressive, and can it do a lot of things that people used to imagine it would only be able to do once it became generally intelligent? Yes. But has anyone actually conducted a test where they were unable to distinguish between an AI and a human being? No. This has never happened, and therefore the Turing Test hasn't been passed.

The entire point of the Turing Test is that, rather than try to define general intelligence as the ability to do specific things that we can test for, we define it in such a way that passing it means we know the AI can do any cognitive task that a human can do, whatever that might be, without trying to guess ahead of time what that is. We don't try to guess the most difficult things for AI to do and say it has general intelligence when it can do them. Otherwise we end up making the mistake that you and many others are making, where we have AI that can do very well in coding competitions but cannot do the job of a low-level programmer, or that can get high marks on a test of Ph.D.-level knowledge of some subject but cannot do an entry-level job in that field.

Humans have always been and continue to be really bad at guessing what will be easy for computers to do and what will be hard, and we're discovering that the hardest things for computers to do are not what we thought, so the Turing Test must remain defined as a test in which the computer passes if it is indistinguishable from a human being. That is not the same as sounding like a human being or doing a lot of things only humans could do until recently.

It is still trivial to distinguish an AI from a human being, because it has a very distinctive writing style that it struggles to deviate from, it cannot answer a lot of very simple questions that most intelligent people can answer, and it refuses to do a lot of things, like use racial slurs, give instructions for dangerous actions, and answer questions with politically incorrect answers.

We shouldn't be too surprised that AI can do well on these benchmarks without leading to massive productivity increases, because doing well on benchmarks isn't AGI. There aren't very many jobs that consist of completing benchmarks.

AI is still pretty dumb in some sense. The latest estimates I've heard of the number of neurons these models have are on the order of 2 trillion. That would make one about as smart as a fox: smarter than a cat but dumber than a dog. If a company said it was investing in dog breeding to see if dogs could replace humans, would you expect a huge increase in our GDP just because it turns out they're better than almost anyone at finding the locations of smells (implying they could be better than us at most things)? Or what if they bred cats to help catch rodents, or apes to instantly memorize visual layouts? It seems absurd only because dogs have been around for a long time, and we're used to the idea that they can't do human jobs and that being good at smelling doesn't predict other cognitive abilities. Chimpanzees are far more intelligent than any AI, but I haven't heard of them taking anyone's job yet.

The difference with AI is it is rapidly improving and we can expect it to reach human intelligence before too long, but we are clearly not there yet and benchmarks are not going to give us more than a rough idea of how close we are to it unless those benchmarks start getting a lot closer to the things we actually want AI to do.

One problem with this is it helps those who want us to conflate wokeness with basic decency.