Glassnoser

0 followers · follows 0 users
joined 2022 October 30 03:04:38 UTC

User ID: 1765

Yes, agency. How do you know they know how to implement it?

there's many, many people who think like Ziz

I put very little weight on this. It seems obvious to me that it's just become a sort of ingroup belief that it is now trendy to have. Ten years ago, it was the opposite. Virtually everyone in AI research found the idea to be ridiculous. Within the last few years, the balance of opinion has changed without any significant relevant new information on what AI will be like.

Every estimate that is actually based on people betting real money or which weights estimates based on the predictive abilities of those providing the estimates gives a very low probability of this happening.

The answer is that he doesn't understand Chinese; he plus the room understands Chinese.

I use AI a lot at work. There is a huge difference between writing short bits of code that you can test or read over and see how it works and completing a task with a moderate level of complexity or where you need to give it more than a few rounds of feedback and corrections. I cannot get an AI to do a whole project for me. I can get it to do a small easy task where I can check its work. This is great when it's something like a very simple algorithm that I can explain in detail but it's in a language I don't know very well. It's also useful for explaining simple ideas that I'm not familiar with and would have to look up and spend a lot of time finding good sources for. It is unusable for anything much more difficult than that.

The main problem is that it is really bad at developing accurate complex abstract models for things. It's like it has memorized a million heuristics, which works great for common or simple problems, but it means it has no understanding of something abstract, with a moderate level of complexity, that is not similar to something it has seen many times before.

The other thing it is really bad at is trudging along and trying and trying to get something right that it cannot initially do. I can assign a task to a low-level employee even if he doesn't know the answer and he has a good chance of figuring it out after some time. If an AI can't get something right away, it is almost always incapable of recognizing that it's doing something wrong and employing problem solving skills to figure out a solution. It will just get stuck and start blindly trying things that are obviously dead-ends. It also needs to be continuously pointed in the right direction and if the conversation goes on too long, it keeps forgetting things that were already explained to it. If more than a few rounds of this go on, all hope of it figuring out the right solution is lost.

I'm not sure how the minutiae of laws were well-known back then.

Maybe I'm misunderstanding the question, but laws are organized into books that are indexed. You look up the relevant statute, search for the right section, and then read a few paragraphs describing the law. If you need to know the details of case law, you consult a lawyer. Lawyers go to law school and read relevant cases to learn how judges are likely to rule on similar future cases.

You still need lawyers to do this because ctrl-f doesn't return a list of all the relevant legal principles from all the relevant cases.

There also has been a massive explosion in the number and complexity of laws since the word processor was invented.

If it is missing a crucial characteristic of human intelligence, how can you say it has AGI? I can believe it could do well on an IQ test, but given that it has a totally different distribution of abilities, the relevance of those tests for AI is very low: the predictive power of an IQ score is dramatically lower for an AI than for a human. So while it might get a 120 IQ result, it is in no way as competent as an actual person with an IQ of 120.

Once it gets agency, we will probably discover something else it is lacking that we didn't think of before. So we can't even modify the test to include agency to get something as good as an IQ test is for humans. We need to remember that it has a different skill distribution and one which we are discovering as it improves. That's why the ultimate test has to be the broadest set of possible tasks that humans do, not these narrow tests which happen to be highly predictive for human abilities.

No AI has ever passed a Turing Test. Is AI very impressive, and can it do a lot of things that people used to imagine it would only be able to do once it became generally intelligent? Yes. But has anyone actually conducted a test where they were unable to distinguish between an AI and a human being? No. This has never happened, and therefore the Turing Test hasn't been passed.

The entire point of the Turing Test is that, rather than trying to define general intelligence as the ability to do specific things we can test for, we define it in such a way that passing it means the AI can do any cognitive task a human can do, whatever that might be, without trying to guess ahead of time what that is. We don't try to guess the most difficult things for AI to do and declare it generally intelligent when it can do them. Otherwise we end up making the mistake that you and many others are making, where we have AI that can do very well in coding competitions but cannot do the job of a low-level programmer, or that can get high marks on a test measuring Ph.D.-level knowledge of some subject but can't do an entry-level job in that field.

Humans have always been and continue to be really bad at guessing what will be easy for computers to do and what will be hard, and we're discovering that the hardest things for computers to do are not what we thought, so the Turing Test must remain defined as a test in which the computer passes if it is indistinguishable from a human being. That is not the same as sounding like a human being or doing a lot of things only humans could do until recently.

It is still trivial to distinguish an AI from a human being, because it has a very distinctive writing style that it struggles to deviate from, it cannot answer a lot of very simple questions that most intelligent people can answer, and it refuses to do a lot of things, like use racial slurs, give instructions for dangerous actions, and answer questions with politically incorrect answers.

We shouldn't be too surprised that AI can do well on these benchmarks without leading to massive productivity increases, because doing well on benchmarks isn't AGI. There aren't very many jobs that consist of completing benchmarks.

AI is still pretty dumb in some sense. The latest estimates I've heard of the number of neurons these models have are on the order of 2 trillion. That would make it about as smart as a fox: smarter than a cat but dumber than a dog. If a company said it was investing in dog breeding to see if dogs could replace humans, would you expect a huge increase in our GDP just because it turns out they're better than almost anyone at finding the locations of smells (implying they could be better than us at most things)? Or what if they bred cats to help catch rodents, or apes to instantly memorize visual layouts? It seems absurd only because dogs have been around for a long time, so we're used to the idea that they can't do human jobs and that being good at smelling doesn't predict other cognitive abilities. Chimpanzees are far more intelligent than any AI, but I haven't heard of them taking anyone's job yet.

The difference with AI is it is rapidly improving and we can expect it to reach human intelligence before too long, but we are clearly not there yet and benchmarks are not going to give us more than a rough idea of how close we are to it unless those benchmarks start getting a lot closer to the things we actually want AI to do.

One problem with this is it helps those who want us to conflate wokeness with basic decency.

I've never understood the point of cruise control. It doesn't really take any effort to maintain a constant speed.

I have a car that's almost two years old. One of the best features is that the side-view mirrors can melt ice and snow. It has a lot of really annoying safety features, though, which you either can't turn off or which automatically turn back on every time you restart the car. The worst is the seatbelt chime that immediately sounds when the car turns on, even if it's just the electronics. Another really bad one is the automatic emergency braking. This one is actually dangerous: it's always activating in situations where it isn't needed.

When these questions were being posted on Twitter, I saw people derive the answers using formal logic, but I knew what they were intuitively, which is probably the ability they're trying to test for.

A few weeks ago, people were posting questions from the LSAT on Twitter which they described as especially difficult. Invariably, I found them all to be really easy. This seems to fit with something I've struggled to understand: why do most people seem cognitively normal most of the time, but as soon as anything becomes just a little abstract, they seem utterly incapable of understanding it? It doesn't usually come up, but if you try to teach someone really basic math, or if you try to point out a logical error in an argument, they suddenly lose the ability to understand the most basic and obvious things, most of which shouldn't even need to be explained. They should be intuitive. I guess basic reading comprehension and using logic just happen to be rare abilities, even among people who seem able to do other things that appear very cognitively demanding.

I do think there should be some restrictions on the right to vote based on being able to get, say, a 170 on the LSAT, because I see so many insane opinions being expressed which, if acted on, would lead to horrible consequences, and they're usually rooted in people not understanding something really basic like how supply and demand work. I don't think education is a solution to this, because it's not just that they've never studied economics. I don't think they are even capable of understanding how supply and demand work.

I have a lot of experience trying to teach people math and arguing with people over abstract ideas. There are a lot of simple logical truths which are intuitive to an intelligent person that you cannot get the average person to understand even after hours of explaining to them. Most people are just not capable of rational thought.

It doesn't make much sense to say it "as a Canadian" then. Better to say "as an Albertan" or something.

This doesn't answer my question about what makes the US dollar expensive. Printing money makes it less expensive, not more expensive.

Printing money doesn't make real estate expensive because it only affects the nominal price, not the real price.

The US has the housing prices of a country that prints money wildly while having the currency cost of a country like Switzerland.

What does that have to do with the US dollar being the reserve currency?

I think Canada does care. Most Canadians have relatives who live in the US and would care that their lives would be disrupted.

I understand that, but I'm asking how they would demand that. The Canadian government doesn't control where Canadians live. I doubt our government would be allowed by the Supreme Court to pass a law preventing us from living in the US.

It cannot physically do that. Sure, any one specific thing can be made in the US, albeit at greater cost. But it would have to take resources from some other industry. It can't make everything it is making now and make everything it imports, not even if it stopped producing exports. It's not just because of the trade deficit which means it consumes and invests more than it produces, but because it would lose the gains from trade.

What are you suggesting? That Canada make it illegal to work in the US?

How does the dollar being the reserve currency make it expensive? What does that even mean? The Federal Reserve can always print more dollars to meet the demand and bring down its value and failing that, prices would simply adjust to whatever they need to be.

That's where most of the people live.

How is forcing the annexation of Canada through economic pressure supposed to work? It would require a constitutional amendment with the agreement of all ten provinces. This would never happen. He'd be lucky to get one, and he is especially not going to get Quebec, which would never want to give up its language laws. Canadians are very proud of not being American. We're the product of 250 years of selection of people who did not move to the U.S. for better weather and better economic opportunity.

How do you prove you don't have another job?

Won't this have adverse selection? The best employees will be the most likely to take this offer, while those who stay will be those who know they can't easily get hired elsewhere.

Doesn't this exist? I used to use something like this called Pidgin back when MSN messenger was the most popular messaging program. I haven't used it in years though. I don't know how well it still works.

Did something happen to reduce the amount of spam email that gets sent? I've been getting a flood of spam for almost 20 years (ever since giving my email address to McDonald's), but in the last few years it has slowly declined, to the point that I have received only nine spam emails in the last month. Has something changed to reduce the amount of spam, or is my email address just slowly disappearing from lists of active email addresses after years of my not clicking on links in spam emails?

Bail and the cost of legal counsel are not intended to be punishments for crimes.