Corvos

0 followers   follows 1 user   joined 2022 December 11 14:35:26 UTC

No bio...

User ID: 1977

Got it, thank you.

Yes, and it has to be this way because anyone providing me a necessary service must be paid less per person than I am paid. Otherwise I can’t afford those necessary services.

For example, I am dependent on food (farmers, truckers, shelf stackers etc.) to live. If those people are too well paid, I can’t afford to eat. So it seems that, most of the time, it’s a prerequisite for civilisation that people doing necessary jobs are paid less than people doing unnecessary jobs. Which is very awkward for society.

I’m specifically differentiating the two concepts. My point is that the economically effective way of allocating value does not match fundamental moral intuitions many have about how to allocate value.

The Marxists put their fingers in their ears and say ‘akshually economic value is derived from labour’ and they’re wrong and it doesn’t work. But the concept is perennially popular because it’s fundamentally intuitive, and I suspect that until we find a way to make the two match a little better we will have permanent ongoing strife. The welfare system was an attempt to do this, but has the now-clear disadvantage that it’s a pyramid scheme that encourages dependency and bankrupts you. I’m interested in exploring the space of possible alternatives.

A pure market system with high liquidity and competition generally produces good results, and humans hate, hate, hate being subject to it. Like evolution, market forces are an eldritch optimising machine. There are some people who seem to feel that market forces are morally good, and I think this is a category error.

Thanks to everyone who answered my hypothetical about the startup.

General opinion seems to come down to ‘you don’t owe them anything, but a decent sum of money would be a gentlemanly / ladylike show of gratitude’. Which seems about right to me.

One of the reasons I’m interested in the question is that much social conflict comes from the discrepancy between the market value of labour (determined primarily by the number and type of people able to do the work) and what you might call the utility value (determined by how important it is that the work gets done). For example, @PutAHelmetOn’s code saves hundreds of thousands in processing costs, farmers stop everyone dying of famine, longshoremen make it possible to have international trade (modulo automation).

I would say the primary economic conflict of the last two hundred years is that employees think in terms of the utility of their work while customers and employers think in terms of its market value.

Trade unions and guilds have historically been used as a method of arbitrage between these two values, limiting competition to drive market value closer to utility value. And the communist states show pretty clearly to me that trying to base your society on something other than market value causes problems. I suppose the welfare state is basically ‘we don’t owe you this money but we’re going to give some of it to you anyway’.

I’d like to write an effort post, but this is what I have for now.

(Meta: is it obnoxious to do multi-top-posts like this? I didn’t want to talk about these ideas right away because I felt it would bias the replies, but at the same time it seems like a waste to write this as a second level reply in an old thread just before the new CW thread opens up).

Nicely done

There is also a very good VR mod for all 3 episodes.

Surely the answer is that the life of an arbitrary stranger is not worth anywhere near $10m. If you tout for charities based on cost per life saved, as EA does, you find that the majority of people are not willing to spend $1 to save a life, much less $10,000,000. That figure is a political fiction designed to reflect voters’ estimation of their own (or a loved one’s) life so as not to make the government unpopular.

Riffing on this discussion I would like to present a scenario:

I am the CEO of a struggling startup, expecting to take a call from a very busy potential client, himself a CEO. We are out of funds and will go bankrupt without his business. If I miss this call, he will certainly not bother to call back.

Unfortunately my phone has died at the crucial moment. I’m in a cafe so I run up and down the tables, begging to borrow someone’s charger. Somebody gives me the charger, I take the call, and my startup goes on to make billions. The call, and therefore the charger, has made me rich beyond imagining.

On the one hand, lending me the charger was an utterly trivial act: even ten dollars in thanks would be a little windfall for the lender. On the other hand, without the lender I would be destitute instead of a billionaire. How much of a debt do I owe the person who lent me their charger?

Edit: ‘owe’ in a moral sense, as opposed to enforceable by a court.

Thank you to both you and @hydroacetylene for explaining the situation. The judgement makes much more sense now.

When you say she was key, do you mean she was significantly involved in the leadership or funding of Amazon, or do you mean in terms of general love and support?

The latter is generally underrated, but I doubt it was necessary or sufficient for Bezos to found Amazon.

Not knowing more, I would be fine with the wife getting ‘live in reasonable comfort for the rest of your life’ money and I would be fine stoning Bezos for adultery, but I don’t see that divorce qualifies her for ownership of his fortune.

Oh, I’m sure the backstage can be pretty grim, I’ve heard bad things about VCs.

Regardless, it's strange to raise hundreds of millions without a working prototype of a single valuable feature, for a firm that existed for close to a decade.

They had a prototype that seemed to work (with faked results) and legitimately did work for a few tests. Holmes’ genius was getting stuffy septuagenarians into such a bidding war that they overruled their own analysts who said the prototypes were insufficient and urged caution.

Were there shenanigans behind that? I don’t know. Possibly. But I think it would have come up in the investigation along with all the other criminal stuff that was going on. Did you read Bad Blood? It’s a fantastic book.

Quite possibly! It’s actually a quote from Scott’s web novel ‘Unsong’, which I recommend if you haven’t read it. It’s a bit clever-clever in places but pretty good and genuinely intelligent for the most part.

Sounds nice but in practice it doesn’t produce good results:

  1. Reversion to the mean - just as geniuses tend not to produce genius children, the disposition of your cohort is a better predictor of your lineage’s behaviour than your personal values. Especially when that cohort forms ethnic enclaves on arrival.
  2. Passing a civics test != sharing your values. Trying to prevent an ethical project from being infiltrated by people who make the right mouth noises is an ancient problem faced by religions, charities, and NGOs, and it’s almost unsolvable. The two most reliable ways are requiring personal recommendations for membership or limiting it to a specific ancestral group like the Hasidim or the Amish. I assume that neither appeals to you.

Seriously, I’m not trying to gotcha you with clever arguments. One of the reasons I moved towards an ancestry-based understanding of Britishness was watching all the immigrants who’d taken the mandatory civics test on ‘British values’ turn around and condemn those values the moment they got their visa. We wanted skilled immigrants who would uphold our values too - who doesn’t? But in general that’s not what we got, and the children are worse.

The thing is, you assume that 'ideals-based identity' and 'ancestral identity' are separate and orthogonal to one another. But even if we put aside tribal allegiance, it's pretty clear that emotional predispositions (openness, authoritarianism, neuroticism, etc.) are at least partially genetic. And this is going to correlate somewhat with race, because most places have had fairly stable demographics for hundreds or thousands of years.

The ideal of "free speech" is going to look very different in a country of high-openness, high-extroversion people vs high-neuroticism, low-openness. Likewise "self-governance". Moved from one country that considers itself meritocratic, self-governing and devoted to free speech to a very ethnically-different country with the same ideals really drove that home for me.

American notions of what their founding ideals mean have already shifted pretty clearly since the country was founded, and I doubt that’s independent of the demographic changes America has been through in that time. Anyone who wants to preserve modern American values has to consider the demographics of the population upholding those values and passing them down to their children.

(Look at how much work it took for Roosevelt et al. to get federal jobs allocated by exam scores rather than patronage. Both factions considered themselves thoroughly American, but one defined ‘merit’ as ‘decades of loyal service’ and the other as ‘intelligence and diligence’.)

In the case of Elizabeth Holmes, I think people were desperate for a female Steve Jobs. Don’t forget she never got significant amounts of VC money; she got the funds from blinding politicians and supermarket CEOs with science.

This is not a coincidence, because nothing is ever a coincidence :)

I heard that Muslims consider Jesus a genuine prophet and messenger, just not the Son of God. (Whereas Jews believe he was a maniac.) So Christians are directionally correct; they just need to learn how important Mohammed is.

Speculation: It’s interesting that the bottleneck is given as lack of data rather than architecture. That opens up the possibility that we may be able to get things moving again by finding some other method of obtaining/creating useable data.

LLMs were historically created to use next-token prediction as a means of solving natural language processing tasks. I think we can regard that problem as provisionally solved. When people talk about GPT’s limits, they aren’t talking about its ability to take English input and produce readable English output. They are talking about general intelligence: the ability to produce sensible, useful English output.

In short, LLMs are general learning machines using natural language as a proxy task. Natural language is cheap and information-rich, but any means of conveying information about the world is fair game, provided that it can be converted into the same token space that GPT uses, via CLIP or something similar.

What is needed is large quantities of data that conveys causal information about the world. Video is probably a good place to start. Some kind of simulated self-play might also be useable. What else could be useable?

(I’m not sure how next-token prediction would work here)
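
To make the ‘proxy task’ framing concrete, here is a minimal sketch (PyTorch; the sizes, names, and VQ-style quantisation step are my own illustrative assumptions, not anyone’s actual pipeline) of how non-text data could share GPT’s token space and be trained with the same next-token objective:

```python
# Minimal sketch: map non-text features into an existing token space,
# then train with the usual next-token-prediction loss. All sizes and
# the quantisation scheme are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, CTX = 50_000, 512, 256  # illustrative, not real GPT sizes

token_emb = nn.Embedding(VOCAB, DIM)  # the shared token space

def frames_to_tokens(frame_features: torch.Tensor) -> torch.Tensor:
    # frame_features: (batch, time, DIM) from some pretrained
    # CLIP-like encoder (assumed). Quantise each feature to the id of
    # the nearest token embedding, so video becomes a token stream.
    dists = torch.cdist(frame_features, token_emb.weight.unsqueeze(0))
    return dists.argmin(dim=-1)  # (batch, time) token ids

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = token_emb
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True)
        self.body = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        t = ids.size(1)
        # causal mask: each position attends only to earlier positions
        causal = torch.triu(torch.full((t, t), float('-inf')), diagonal=1)
        return self.head(self.body(self.emb(ids), mask=causal))

model = TinyLM()
ids = torch.randint(0, VOCAB, (4, CTX))  # stand-in for any token stream
logits = model(ids[:, :-1])              # predict token t+1 from tokens <= t
loss = F.cross_entropy(logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))
loss.backward()  # same objective whether the ids came from text or video
```

The point of the sketch is that the loss never changes; only `frames_to_tokens` (or its equivalent for other modalities) does.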

Forgive me, I didn’t mean to imply that drinking bottled water was stupid in literally all circumstances, only that it would be strange to buy it when properly treated tap water is a viable alternative.

I would normally refill a flask or old bottle from the tap if I wanted portable water, but I drink bottled water in countries where the taps aren’t safe or if I’m suddenly thirsty when out and about.

I was always taught that drinking bottled water is a stupid waste of money.

Only rumours, sorry.

Indians, quite frequently. The problem is partly the legacy of colonialism and the heavy-handed way that the Indian government has stirred up anti-British resentment to escape responsibility for India’s relative lack of development. It’s also that western societies don’t really recognise or care about caste, so very high-ranking Indians move to the UK expecting to be treated like the native upper class and don’t receive that treatment.

East Asians, I don’t know. We didn’t have many in the UK until recently and my experience is all with first generation people or dealing with them in their own country; the ones I knew were pro-white if anything.

It’s not the same thing. Tay was set to continue learning after deployment, and trolls figured out how to bias her input data and make her into a Nazi.

Google, like Meta, releases models in a frozen state and has very stringent evaluation criteria for releasing a model. The input data is evaluated to give percentages by country of origin (less developed, more developed) and the race and gender of subjects. Look at the Bias section of something like the SAM or DINO paper. Quite possibly the original error crept in due to individually minor decisions by individual contributors*, but there is no way people on the team didn’t realise there was a problem in late production. Either they felt that the AI’s behaviour was acceptable, or they didn’t feel able to raise concerns about it. Neither of those says good things about the work culture at Google.

*I believe this is called ‘systemic racism’ by the in crowd.
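
(As an aside, the frozen-vs-online distinction is easy to state in code. A minimal sketch, PyTorch-style, with hypothetical helper names - nothing here is Google’s or Microsoft’s actual setup:)

```python
import torch.nn as nn

def tay_style_online_update(model: nn.Module, optimiser, loss_fn, user_input, target):
    # Online learning: every user interaction becomes training data,
    # which is how trolls were able to steer Tay after deployment.
    loss = loss_fn(model(user_input), target)
    loss.backward()
    optimiser.step()
    optimiser.zero_grad()

def freeze_for_release(model: nn.Module) -> nn.Module:
    # Frozen release: weights are fixed before shipping. Users can
    # still steer the *outputs* via prompts, but not the weights.
    model.eval()
    for param in model.parameters():
        param.requires_grad = False
    return model
```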

To quote Prime Minister Jim Hacker on an overbearing Foreign Office:

"Are they here to follow our instructions, or are we here to follow theirs?"

The FO makes some good points: politicians tend to be geographically ignorant and prone to black-and-white thinking, and they have short time horizons. Nevertheless, the bureaucrats are blinkered, prejudiced, and incompetent, and they leverage their expertise to block off any feedback or reform that might make them otherwise. Ultimately that can't be tolerated.

https://youtube.com/watch?v=fVVX0lHZ8JE

In case you didn’t follow at the time, Gemini would literally refuse image creation requests that showed white people in a positive light, whilst simultaneously erasing them from historical images.

To achieve that level of effect, you have to have a VERY skewed training set, and follow it up with explicit instructions in the prompt.

The fact that they trained an AI like this, and that not one of the testing team felt comfortable saying that the model was obviously biased against depicting white people in positive or historical contexts, shows pretty explicit racism in my view. Every BigCorp AI paper has a Bias section now, so they definitely knew. And other BigCorp LLMs and image generators have avoided this kind of problem without legal liability. Pretty clearly nobody at Google was interested in, or comfortable with, bringing up concerns about the depiction of white people, and white people only.

https://old.reddit.com/r/ArtificialInteligence/comments/1awis1r/google_gemini_aiimage_generator_refuses_to/