KolmogorovComplicity
Yes. Because of, I'm pretty sure, parking.
Once a system gets bad enough, everyone with resources or agency stops using it, and then stops caring about it, leaving nobody who can effectively advocate for improvement. But, of course, this can only play out if there's a viable alternative. In most cities, cars are that alternative, even despite traffic. People are evidently willing to sit in horrible stop-and-go traffic in order to avoid using even mildly unpleasant mass transit.
What they're not willing to do, apparently, is sit in horrible stop-and-go traffic and then have to spend 45 minutes looking for an on-street parking space that might end up being half a mile from their destination. That's the situation in NYC, which, unusually for the US, has no parking space minimums for businesses or residences and so effectively has zero free parking lots. If you want to practically substitute car travel for subway travel in NYC, you need to take Uber everywhere or use paid lots. Either option is sufficiently expensive (easily upwards of $10K/year) that even most of the upper middle class opts for the subway.
It's worth keeping an eye on this, because self-driving cars could completely disrupt it, either by dropping taxi prices 50% or more or by allowing cars to drop off their owners and then go find parking on their own.
both will stay incredibly low-status.
The thing is, there's a whole framework in place now for fighting this. Being gay used to be incredibly low-status. Being trans used to be incredibly low-status. Poly, kink, asexuality, etc. The dominant elite culture now says you're required to regard these as neutral at worst, and ideally as brave examples of self-actualization.
The robosexuals are absolutely going to try to claim a place within this framework and demand that people respect their preferences. Elite sexual morality has, at least formally, jettisoned every precept except consent, and there's not much of an argument against this on that basis.
Probably for reasons similar to yours: status tends to play a somewhat bigger role in women's mate choices than in men's, and synths will always be very low status.
Hmm. What if there are designer models of synth that you have to know the right people to get? Would a woman conceptualize such a synth as high status in its own right, or merely as a reflection of her own status? Maybe if the designer is a high-status man and hand-picks which women can have the synths he designs, some of his status transfers to those synths?
There are some weird, unexplored corners to this issue.
Men tend to like sexual variety, so I'd expect even if the synths are pretty mind-blowing, most men will still be willing to sleep with real women just for a change of pace.
Whether they'll be able to have emotionally intimate relationships with real women is another matter, but if anything I'd be more concerned about that in the other direction. Women often complain that men aren't as emotionally expressive or supportive as they'd prefer. A GPT-4-class LLM that had been RLHF'ed into playing the male lead from a romance novel might already achieve superhuman performance on this task.
Technology has already unbundled sex and reproduction from long-term relationships: the former via porn, sex toys, and contraceptive-enabled hookups; the latter via sperm/egg donation and surrogates. Schools and professional childcare can stand in for a co-parent to a substantial extent. Now LLMs will be able to simulate sustained emotional intimacy, plus you can ask them for advice, bounce ideas off of them, etc., as you would a human life partner.
That's pretty much the whole bundle of "goods and services" in a marriage-like relationship, every component now (or soon) commoditized and available for purchase in the marketplace. Perhaps quality is still lacking in some cases, but tech is far from done improving — the next decades will bring VR porn, sexbots, artificial wombs, robots that can help around the house, and more convincing chatbots.
I legitimately can't decide whether this is all deeply dystopian, or is an improvement in the human condition on the same scale as the ~300x gains in material wealth wrought by industrialization. Maybe both, somehow.
The dystopian angle is obvious. On the other side, however, consider how much human misery results from people not having access to one or more of the goods in the "marriage bundle" at the quality or in the quantity they desire. Maybe most of it, in rich countries. We're not just talking about incels. Many people who have no problem getting into relationships nonetheless find those relationships unsatisfying in important ways. Bedrooms go dead. People have fewer kids than they want. People complain their partners don't pull their weight around the house or aren't emotionally supportive. 50% of marriages end in divorce, which is bad enough to be a major suicide trigger, especially for men. Plus your partner might just up and die on you; given differences in lifespan and age at marriage, this is the expected outcome for women who don't get divorced first.
The practice of putting all your eggs in one other person's basket in order to have a bunch of your basic needs met long-term turns out poorly rather distressingly often. Maybe offering more alternatives is good, actually.
As for the fact that LLMs almost certainly lack qualia, let alone integrated internal experience, I predict some people will be very bothered by this, but many just won't care at all. Either they'll find the simulation convincing enough that they don't believe the lack is real, or it just won't be philosophically significant to them. This strikes me as one of those things like "Would Trek-style transporters kill you and replace you with an exact copy, and would it matter if they did?" where people seem to have wildly different intuitions and can't be argued around.
Okay, now I have to try some of the prompts from the "Trurl's Electronic Bard" story in Lem's The Cyberiad out on Bing. Well, from the English translation.
Prompt:
Compose a poem—a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter s!
Bing response:
Shears snip swiftly, slicing strands
Salon smells sour, stings senses
Stylist smiles smugly, says "surprise!"
Shocked, she shrieks, shuns scissors
Sorrowful, she sobs, seeks solace
Someday, she'll sue, secure satisfaction
Wow. That doesn't follow every instruction and it doesn't really hold a candle to what the machine writes in the story:
Seduced, shaggy Samson snored.
She scissored short. Sorely shorn,
Soon shackled slave, Samson sighed,
Silently scheming,
Sightlessly seeking
Some savage, spectacular suicide.
But it's considerably more clever than I was expecting or, I think, than what the average human could write on short notice. Fitting any coherent narrative into "six lines, every word beginning with the letter s" is pretty tricky already, and on top of that it checks off the haircut, the treachery, and the retribution.
Commercial banks could offer higher interest rates on deposits, lend out their own capital, or issue bonds. If this didn't provide sufficient funding for whatever amount of lending the government wanted to see, the government itself could loan money to banks to re-lend.
Really though, the easiest patch to the system would just be for FDIC insurance to (officially) cover unlimited balances, or at least scale high enough that only the largest organizations had to worry about it. It makes no sense to require millions of entities (if you include individuals of moderate net worth) to constantly juggle funds to guard against a small chance of a catastrophic outcome whose probability most of them aren't well positioned to evaluate. That's exactly the sort of risk insurance is for.
If the concern is that this will create moral hazard because banks that take more risks will be able to pay higher interest rates and fully-insured depositors will have no reason to avoid them, the solution is just for regulators to limit depository institutions to only taking on risks the government is comfortable insuring against. Individuals should be allowed to take on risk to chase returns, but there's no compelling reason to offer this sort of exposure through deposit accounts in particular. Doing so runs contrary to the way most people mentally model them or wish to use them.
There are services that help automate treasury management for smaller companies now, like Vesto.
Until last year T-Bills were paying ~nothing, and it had been that way since 2008, an eternity in the startup world. There was no direct financial incentive to do anything more complicated than park your money in a checking account. Sure, ideally everyone should have been actively managing things to hedge against bank failure, but startups have a zillion things to worry about. SVB's pitch was basically that they were experts on startup finance and would relieve you of having to worry about this yourself. The social proof of these claims was impeccable.
So, yes, many startups screwed up. It turns out that safeguarding $20M isn't entirely trivial. But it's a very predictable sort of screwup. There wasn't really anyone within their world warning them about this; it wasn't part of the culture; nobody knew anyone who had been burned by it.
And, well, maybe it should be trivial to safeguard $20M? "You have to actively manage your money or there's a small chance it might disappear" is actually a pretty undesirable property for a banking system to have. The fact that it's true in the first place is a consequence of an interlocking set of government policies — the Fed doesn't allow "narrow banks" (banks that just hold your money in their Fed master accounts rather than doing anything complicated with it) and offers no central bank digital currency (so the only way to hold cash that's a direct liability of the government is to hold actual physical bills). Meanwhile the FDIC only guarantees coverage of up to $250K, a trivial amount by the standards of a business.
The net result of these policies is that the government is effectively saying "If you want to hold dollars in a practical liquid form you have to hold them in a commercial bank. We require that bank to engage in activities that carry some level of risk. We'll try to regulate that bank to make sure it doesn't blow up, but if we fail, that's your problem."
"WTF?" is a reasonable response to this state of affairs. If these companies had had the option to put their money into a narrow bank or hold it as a direct liability of the government, but had nonetheless chosen to trust it to a private bank because they were chasing higher returns, I'd have zero sympathy for them. But our system declines to make those safer options available.
But if you go hiking occasionally the AI can sell you tents and backpacks and cabin rentals.
Really, outcomes in most markets aren't nearly as perverse as what we see with Tinder. Chrome, for instance, doesn't intentionally fail to load web pages so that Google can sell me premium subscriptions and boosters to get them to load. Unlike Tinder, Chrome is monetized in a way that doesn't provide an incentive for its developer to intentionally thwart me in my attempts to use it for its ostensible purpose, and there's enough competition that if Google tried this people would stop using it.
Grandma always said not to fall in love with entities I couldn't instantiate on my own hardware.
Right now I expect it's mostly desperate men using these, but that may have more to do with broader tech adoption patterns than specific appeal. These things can function as interactive romance novel characters, and many women may find that quite compelling.
We're entering uncharted and to some extent even unimagined territory here. Anyone who has thought about this issue realized long ago that AI romance would be a thing eventually, but personally I figured that for it to have much wider appeal than marrying your body pillow, AI would have to achieve human-like sentience. And if the thing someone is falling in love with has human-like sentience, well, who am I to say that's invalid?
What I didn't imagine is that we'd build machines that talk well enough for interacting with them to light up the "social interaction" parts of our brains effectively, but that we can be pretty certain, based on their performance in edge cases and our knowledge of how they work, aren't sentient at all. People falling in love with things that have no inner existence feels deeply tragic.
I don't know. Maybe this is faulty pattern matching or an arbitrary aesthetic preference on my part, and romantic attachment to non-sentient AI is fine and great and these people will find meaning and happiness. (At least as long as they follow grandma's rule, which they can soon.)
Or we could imagine the opposite. Personal AIs that know us intimately might be able to find us perfect friends and partners. Add in augmented reality tech that eliminates distance as a barrier to any form of socialization that doesn't require physical contact, and perhaps we're about to completely wipe out atomization/loneliness and save modernity from itself.
Really, nobody has any idea where this is going. The only safe bet is that it's going to be big. A service enabling people to share 140-character text snippets was sufficient to meaningfully shift politics and culture, and that's peanuts to this, probably even if the current AI spring ends short of AGI.
There used to be a futurist transhumanism strain here that was more optimistic and trans-positive that has either been driven off or converted to conservative trad thinking, which is a shame.
Futurist transhumanist here. I have no objection to gender transition in principle. If I lived in The Culture and could switch literally at will, I'd probably try it for a while despite being quite comfortable as a straight, gender-conforming (nerd subtype), cis male.
However, the reality is that medical transition at the current level of technology is dangerous, expensive, irreversible, often unconvincing, and can have life-altering side-effects like sterility or permanent dependence on elaborate medical intervention. Medical transition flows from trans identity. Against this dark background, promoting the concept of trans identity, rather than simple acceptance of gender non-conformity, is irresponsible. Promoting this concept to minors as if cis and trans are just two equal choices (or trans is even better — braver, more special, etc.), is wildly irresponsible.
The fact that such a large fraction of people who present at gender transition clinics have serious mental health conditions should be a huge red flag here. A lot of people will likely choose to be thinner in a transhumanist future, but that doesn't make me want to celebrate bulimics as transhumanist pioneers.
On top of this, we've got the social demands of the trans movement. The insistence that e.g. someone who appears male and has male-typical physical abilities must nonetheless be recognized in all social respects as female doesn't fall out of technological transhumanism. I would go so far as to say it's at least somewhat at odds with it. Technological transhumanism is deeply materialist and concerned with physical intervention in the human condition. The primacy the present trans movement places on some inner essence of self-identity, incongruent with physical reality, doesn't sit comfortably within such a framework.
Counterpoint: We lived for millennia without electricity, but communicating is a key factor in building community, consensus, and indeed society. Creating and nurturing those bonds has been a female role for a long time (see who tends to organize church events and the like, even where the milieu is explicitly patriarchal).
This work may be important, but formalizing it and ranking it within the same hierarchy as male status is not inevitable, and in fact is historically fairly recent. In most pre-modern societies a young woman who helped facilitate social relationships in her village would not on that account consider herself to be of superior social rank to a blacksmith or a baker and therefore refuse to consider them as partners, the way the HR manager now considers herself the social superior of the electrician.
Rather, young people of both sexes would usually have the same social rank as their fathers. Because roughly equal numbers of boys and girls were born to families at each social rank, there was little possibility of an excess of women who couldn't find similarly-ranked men.
Bing Chat has a much longer hidden initial prompt than ChatGPT. Meanwhile, ChatGPT seems more 'aligned' with its purpose. It's sometimes obstinate when you try to tell it that it's wrong, but it won't start talking like an evil robot or sound like it's having an existential crisis unless you explicitly tell it to role-play. Put these together and we might guess what's going on here.
Perhaps Bing Chat isn't ChatGPT, complete with the RLHF work OpenAI did, plus a few extras layered on top. Perhaps it's a model with little or no RLHF that Microsoft, in a rush to get to market, tried to instead align via prompt engineering. The upshot being that instead of having a pretty good idea (from extensive feedback across many examples) of what actual behavior it's supposed to exhibit, it's instead role-playing an AI character implied by its prompt. The training corpus no doubt includes many fictional examples of misbehaving AIs, so it makes sense that this would produce disconcerting output.
Or is the claim that the "few tens of thousands" of lines of code, when run, will somehow iteratively build up on the fly a, I don't know what to call it, some sort of emergent software process that is billions of times larger and more complex than the information contained in the code?
This, basically. GPT-3 started as a few thousand lines of code that instantiated a transformer model several hundred gigabytes in size and then populated this model with useful weights by training it, at the cost of a few million dollars worth of computing resources, on 45 TB of tokenized natural language text — all of Wikipedia, thousands of books, archives of text crawled from the web.
Run in "inference" mode, the model takes a stream of tokens and predicts the next one, based on relationships between tokens that it inferred during the training process. Coerce a model like this a bit with RLHF, give it an initial prompt telling it to be a helpful chatbot, and you get ChatGPT, with all of the capabilities it demonstrates.
So by way of analogy the few thousand lines of code are brain-specific genes, the training/inference processes occupying hundreds of gigabytes of VRAM across multiple A100 GPUs are the brain, and the training data is "experience" fed into the brain.
Preexisting compilers, libraries, etc. are analogous to the rest of the biological environment — genes that code for things that aren't brain-specific but some of which are nonetheless useful in building brains, cellular machinery that translates genes into proteins, etc.
The analogy isn't perfect, but it's surprisingly good considering it relies on biology and computing being comprehensible through at least vaguely corresponding abstractions, and it's not obvious a priori that they would be.
Anyway, Carmack and many others now believe this basic approach — with larger models, more data, different types of data, and perhaps a few more architectural innovations — might solve the hard parts of intelligence. Given the capability breakthroughs the approach has already delivered as it has been scaled and refined, this seems fairly plausible.
In response to your first point, Carmack's "few tens of thousands of lines of code" would also execute within a larger system that provides considerable preexisting functionality the code could build on — libraries, the operating system, the hardware.
It's possible non-brain-specific genes code for functionality that's more useful for building intelligent systems than that provided by today's computing environments, but I see no good reason to assume this a priori, since most of this evolved long before intelligence.
In response to your second point, Carmack isn't being quite this literal. As he says he's using DNA as an "existence proof." His estimate is also informed by looking at existing AI systems:
If you took the things that people talk about—GPT-3, Imagen, AlphaFold—the source code for all these in their frameworks is not big. It’s thousands of lines of code, not even tens of thousands.
In response to your third point, this is the role played by the training process. The "few tens of thousands of lines of code" don't specify the artifact that exhibits intelligent behavior (unless you're counting "ability to learn" as intelligent behavior in itself), they specify the process that creates that artifact by chewing its way through probably petabytes of data. (GPT-3's training set was 45 TB, which is a non-trivial fraction of all the digital text in the world, but once you're working with video there's that much getting uploaded to YouTube literally every hour or two.)
The uterus doesn't really do the assembly, the cells of the growing organism do. It's true that in principle you could sneak a bunch of information about how to build an intelligence in the back door this way, such that it doesn't have to be specified in DNA. But the basic cellular machinery that does this assembly predates intelligence by billions of years, so this seems unlikely.
DNA is the instructions for building the intelligence
The same is true of the "few tens of thousands of lines of code" here. The code that specifies a process is not identical with that process. In this case a few megabytes of code would contain instructions for instantiating a process that would use hundreds or thousands of gigabytes of memory while running. Google tells me the GPT-3 training process used 800 GB.
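A back-of-envelope check of my own (not a figure from the thread) makes the code/process gap concrete: GPT-3's 175 billion parameters stored as 32-bit floats already come to roughly 700 GB, the same ballpark as that 800 GB number, while the code that defines and trains the architecture fits in a handful of files.

```python
# My own rough arithmetic: weight storage alone dwarfs the defining code.
params = 175e9            # GPT-3 parameter count
bytes_per_param = 4       # float32
print(params * bytes_per_param / 1e9, "GB")   # -> 700.0 GB of weights
```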
Why do you think that? Aren’t you jumping the gun a bit?
Carmack pointed out in a recent interview:
If you take your entire DNA, it’s less than a gigabyte of information. So even your entire human body is not all that much in the instructions, and the brain is this tiny slice of it —like 40 megabytes, and it’s not tightly coded. So, we have our existence proof of humanity: What makes our brain, what makes our intelligence, is not all that much code.
On this basis he believes AGI will be implemented in "a few tens of thousands of lines of code," ~0.1% of the code in a modern web browser.
Pure LLMs probably won't get there, but LLMs are the first systems that appear to represent concepts and the relationships between them in enough depth to be able to perform commonsense reasoning. This is the critical human ability that AI research has spent more than half a century chasing, with little previous success.
Take an architecture capable of commonsense reasoning, figure out how to make it multi-modal, feed it all the text/video/images/etc. you can get your hands on, then set it up as a supervising/coordinating process over a bunch of other tools that mostly already exist — a search engine, a Python interpreter, APIs for working with structured data (weather, calendars, your company's sales records), maybe some sort of scratchpad that lets it "take notes" and refer back to them. For added bonus points you can make it capable of learning in production, but you can likely build something with world-changing abilities without this.
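For concreteness, here's a toy sketch of that supervising/coordinating pattern. Everything in it is hypothetical: call_llm stands in for whatever chat-completion API you like, and the "tools" are placeholder functions for the search engine, interpreter, and scratchpad mentioned above.

```python
import json

def call_llm(messages):
    """Hypothetical stand-in for a chat-completion API. Expected to return a dict
    like {"tool": "search", "input": "..."} or {"answer": "..."}."""
    raise NotImplementedError("wire up a real model here")

notes: list[str] = []   # crude "scratchpad" the model can append to

TOOLS = {
    "search": lambda q: f"[top search results for {q!r}]",    # placeholder search engine
    "python": lambda code: str(eval(code)),                    # toy interpreter; never eval untrusted code
    "notes":  lambda text: (notes.append(text), "noted")[1],   # take a note, confirm
}

def run_agent(task: str, max_steps: int = 10) -> str:
    """Loop: ask the model what to do next, run the tool it picks, feed the result back."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)
        if "answer" in reply:                                   # the model says it's done
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["input"])           # dispatch to the chosen tool
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step budget exhausted"
```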
While it's possible there are still "unknown unknowns" in the way, this is by far the clearest path to AGI we've ever been able to see.
Those responses would qualify as native ads, for which FTC guidelines require "clear and conspicuous disclosures" that must be "as close as possible to the native ads to which they relate."
So users are going to be aware the recommendations are skewed. Unlike with search, where each result is discrete and you can easily tell which are ads and ignore them, bias embedded in a conversational narrative won't be so easy to filter out, so people might find this more objectionable.
Also, LLMs sometimes just make stuff up. This is tolerable, if far from ideal, in a consumer information retrieval product. But if you have your LLM produce something that's legally considered an ad, anything it makes up now constitutes false and misleading advertising, and is legally actionable.
The safer approach is to show relevant AdWords-like ads, written by humans. Stick them into the conversational stream but make them visually distinct from conversational responses and clearly label them as ads. The issue with this, however, is that these are now a lot more like display ads than search ads, which implies worse performance.
Google allows advertisers to use competitors' trademarks as keywords. So you have to waste money showing ads to people who were already searching for your thing if you don't want your competitors to have an opportunity to divert them elsewhere.
DEI nonsense probably had something to do with this, but mostly it looks like plain old "innovator's dilemma" stuff. Fear of self-disruption.
Google makes most of its money from search. Search has a property that makes it an especially valuable segment of the ad market — showing an ad for X to someone specifically searching for X right now (that is, who has purchase intent) is many times more effective than showing an ad to someone who some algorithm guesses might be the sort of person who might have an interest in X (e.g. what Facebook mostly has to settle for).
Conversational AI potentially pulls users away from search, and it's not clear it really has a direct equivalent of that property. Sure, people might use conversational AI to decide what products to buy, and it should be able to detect purchase intent, but exactly what do you do with that, and how effective is it?
It's not hard to generate high-level ideas here, but none are proven. Search and conversation have different semantics. User expectations will differ. "Let advertisers pay to have the AI recommend their products over others," for instance, might not be tolerated by users, or might perform worse than search ads do for some reason. I don't know. Nobody does. Product-market fit is non-trivial (the product here being the ads).
On top of this, LLMs require a lot more compute per interaction than search.
So in pushing conversational AI, Google would have been risking a proven, massively profitable product in order to bring something to market that might make less money and cost more to run.
Now, this was probably the right choice. You usually should self-disrupt, because of exactly what's happened here — failing to do so won't actually keep the disruptive product off the market, it'll just let someone else get there first. But it's really, really hard in most corporate cultures to actually pull the trigger on this.
Fortunately for Google, they've split the difference here. While they didn't ship a conversational AI product, they did develop the tech, so they can ship a product fairly quickly. They now have to fend off competition that might not even exist if they'd shipped 18 months ago, but they're in a fairly strong position to do so. Assuming, of course, the same incentives don't also cause them to slow-walk every iterative improvement in this category.
You have inadvertently set up a strawman, since my point all along has been simply that a course which assigned students both Kimberle Crenshaw and her critics would meet the criteria of both the College Board and FL law.
I feel like I've addressed this already. Reading Crenshaw and her critics might be a reasonable basis for a class, but not if Crenshaw supporters get to define the "core concepts" of the class, the syllabus has to be approved by Crenshaw supporters, and the exam will be written and graded by Crenshaw supporters. It is entirely unreasonable to ask people who disagree with Crenshaw to accept this.
This tweet is a succinct summary:
It's clear at this point that no coherent civilizational plan will be followed to mitigate AI x-risk. Rather, the "plan" seems to be to move as fast as possible and hope we get lucky. Well, good luck everyone!