RedRegard

User ID: 1832
0 followers   follows 0 users
Joined 2022 November 09 21:32:36 UTC

No bio...

For the brutal economy thing, I think that's a result of increasing technological sophistication demanding ever higher skill levels from human workers. The economy bifurcates as the middle is eaten up through automation: on one side, roles where little beyond a warm body is required, as in retail; on the other, roles requiring highly advanced technical abilities to fill the gaps automation leaves.

This is a simplification, but the general trend is for automation to eat up moderately skill-dependent occupations. Computers ate into traditional office work while creating more sophisticated coding tasks, but now entry-to-mid-level coding is itself being threatened by AI.

If an area is less technically demanding, it is more amenable to automation, generally.

Areas where this pattern doesn't hold are misleading: getting around a job site as a plumber does, say, doesn't seem highly skill-dependent, yet that kind of physical navigation is a skillset machines find very difficult. There are incongruities between human capabilities and those of machines that don't map cleanly onto the pattern I've outlined.

I think one reason you're having trouble finding work is that there's been a major oversupply of white-collar degreed workers relative to what the economy actually requires. Those sorts of jobs are cushy and high-status, but too many people have been going to college to get them, and now we're seeing an overshoot of demand. Probably more tradespeople are needed instead, but owing to the bifurcation effect I outlined, trades don't pay as well as the absolute top-level knowledge occupations and are a lot more taxing, so everyone's trying to force their way through a narrow funnel to the top instead.

I must differ here as I do not see evidence (in domains I'm able to judge) of AI employing techniques and theory in its tasks. Ask it to mimic Stephen King and then compare the output to actual Stephen King. You'll understand what I mean.

I cannot speak to math here, as I lack competency in it. But from what I hear from coders, it's similar in that domain as well: AI can churn out volumes of legible code, but it cannot employ structure.

Humans have techniques and theories which inform their decisions high and low as they layer things together using judgement, intuition, etc., while AIs appear to generate text using probabilistic hacks. AI appears to be able to recreate low-complexity patterns from its dataset. I disagree that these processes are related except at a very basic level.

We formulated our understandings of the world and our interactions with it into techniques and theories, and when we build things we do so by employing those techniques and theories from a standpoint of engineering and design. LLMs are merely next-word generators. They can recall many of the things in their training data and regurgitate them to us, but their outputs aren't the products of strategically employed techniques and theories. This inherently limits the complexity of the outputs they can give us.

To be fair his numbers declined with each successive election and he never managed to win a majority government after the first. He also lost the popular vote both times after the first. I think he stayed around for so long because of Canadian ambivalence. It's apples to oranges when comparing Canadian politics to American.

Military investment can also be a boon to the mass populace. My understanding of WW2 spending is that it helped bid up wages, by creating an enormous demand for labor, while simultaneously denying growth potential to the capitalist class by forcing them to forgo commercial developments in favor of facilitating the war effort. The result was the most egalitarian period in American history following the war, when the gulf between the mass populace and the capitalist class had been reduced to almost nothing. What ensued was an explosion in the creative arts, high taxes on the rich (a symbol of their reduced power), and a period of social calm, arguably broken in part once inequality began creeping upwards again.

Trump's budgeting probably will not accomplish this, though, and the conditions necessary for the post-war boom were probably unique to that moment in history. His war in Iran is expected to cost trillions of dollars in the long run, so perhaps we can infer that that is the true reason for the spending and the one objective it will accomplish: counterbalancing an enforced burden.

A method I've seen is for someone to copy the transcript of an existing video, feed it into an AI and ask it to make arbitrary changes, feed the outputted script into an AI voice generator, then use AI + third worlders on Fiverr to stitch together visuals to go with it, and voila, a complete video with minimal effort.

There's also a trend of using AI actors or clones. Essentially, since so many videos are just people talking into cameras with minimal movement, an AI generated actor is totally serviceable. It's AI script + AI voice, exposited by an AI person.

Now the question is, is AI mimicking people or were people already mimicking AI?

It's all downstream of the choices YouTube makes. YouTube wants to show you videos lengthy enough for ads, so it creates incentives, both monetary and exposure-based, for creators to make them, and then adjusts its algorithm to show them to you. YouTube controls it all, and the content creators are merely its puppets. YouTube has a monopoly over this sort of thing, and that is how it gets away with it. The monopoly is more or less inherent to how these digital platforms operate, with market forces encouraging centralization of user bases. So really it's digitized markets that are to blame for all of this; YouTube's just the beast they operate through.

You're making the mistake of thinking it operates as a human does. Humans are constantly forming models of the world and using those models to inform their judgements and actions. While LLMs potentially develop models during their training, their prompt outputs are based on probabilistic likelihood calculations. 'The code being bad' is one likelihood which might emerge for it to disjointedly expand on, but there are many others. It's more like it's exploring probability space while hugging the median than actually contemplating your question; the calculations it runs through are instantaneous.

*A note on its calculations: the probabilities themselves pertain to the text being outputted, not necessarily to the underlying concepts. So if it says something about 'the code being bad', that might only reflect calculations over the very words these ideas are expressed in rather than the ideas themselves. An LLM might not have, through its training or anything else, even an approximate understanding of what code or 'bad' are, but instead merely highly elaborate algorithms linking those words and other word assemblages together.

So since it's operating primarily or wholly on a linguistic level, it is impossible to get it to divorce its output from your starting prompt, which sets off the whole probabilistic determinacy cycle.
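To make the 'hugging the median' point concrete, here's a toy sketch. Everything in it is invented for illustration (real LLMs use neural networks over tokens, not lookup tables): the 'model' attaches probabilities to word strings alone, with no representation of the concepts behind them, and generation is just repeatedly picking a likely next word given the current context.

```python
import random

# Toy "next-word" model: probabilities attach to word strings,
# not to any underlying concepts (all numbers are made up).
next_word_probs = {
    ("the", "code"): {"is": 0.6, "looks": 0.3, "compiles": 0.1},
    ("code", "is"): {"bad": 0.5, "fine": 0.3, "broken": 0.2},
}

def continue_text(words, steps, greedy=True):
    """Extend a word list by repeatedly choosing a likely next word."""
    words = list(words)
    for _ in range(steps):
        context = tuple(words[-2:])
        dist = next_word_probs.get(context)
        if dist is None:
            break  # no statistics for this context: the toy model stalls
        if greedy:
            # "Hugging the median": always take the most probable word.
            words.append(max(dist, key=dist.get))
        else:
            # Sampling: explore the probability space instead.
            choices, weights = zip(*dist.items())
            words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["the", "code"], steps=2))  # → "the code is bad"
```

The point of the sketch: every choice is driven by statistics attached to the words themselves, which is why greedy decoding gravitates to the most probable continuation regardless of whether 'bad' tracks any real property of any real code.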

To the extent that America's foreign policy was subject to democratic influence, I think it did lean towards a rules-based order more than any other empire or hegemon has historically done. Vietnam is the crowning example: taking a geopolitical loss in order to stand by popular principles and appease the masses. The problem is that the people only take an active interest in foreign affairs from time to time, and quite a lot can be done clandestinely through the CIA or whatever. This gives the State Department a lot of room to pursue an agenda that's might-makes-right under the hood while preserving an outward appearance of civility.

But the very need to disguise their actions imposes some limitations, so even that can be considered a win for creating a more idealistic world.

I'm guessing you have an entirely different view of novels than I do, but as aesthetic works I can't see how extreme care in the details isn't essential to the form. If you're just skimming through The Drowned World by Ballard, not subvocalizing the prose or catching all the nuances and fine structural meanings, then I don't see how you're getting anything like a full appreciation of the story, or even really a partial one. But you think AI can write at that caliber?

And even more confusing is that you think AI can do fine at art but fails at business communiqués, which, though still demanding, are nevertheless much cruder and more template-driven?

I switch back and forth depending on context. If I'm wanting to extract info and nothing else, I'll skim with minimal subvocalization. Generally I'll partly subvocalize, but at a fast, syncopated clip. When I encounter good writing, I give myself the time to taste it fully. When I read over my own writing, I'm very attentive to rhythm.

Even if we're discounting rhythm in AI prose, though, there are many other reasons it's bad. There's a lack of structure at any level, other than randomly inserted lists and such, and it's fraught with all sorts of repetitions and other inefficiencies. It blurs meanings, inserts arbitrary detail, hallucinates, forgets things, etc. Much of this is difficult to see at the paragraph level. It's the kind of thing that builds on itself, until you're left with a tottering spire of slop.

I think one of the main things that makes AI output unreadable for some but not others is how attentive to detail the reader is. If someone doesn't really care about the overall quality of the prose, or of an artwork or anything else, and doesn't want to examine it minutely for how the form feeds into the substance, for its minute intricacies, then they won't see what AI output is missing.

You can add that they are literal slavers who fund ethnic-cleansing terrorists in Sudan, betray fellow Muslims by allying with imperial outsiders, and extract much of the region's natural endowment of wealth through resource and geography rents while spending it on sybaritic pleasures. And yeah, they basically exist to cater to the globalist rich who don't think they're cool or suave enough for any other tax haven with greater personality and style, and who would prefer to construct their personal images around chintzy opulence and unreproductive sex with Russian prostitutes. Or at least that's the view of the place I receive.

Typically those sorts of praetorian-style elite forces select on the basis of loyalty or ideological commitment rather than ability, and are intended more to reinforce the regime than to excel at special military missions. However, since they do get the lion's share of available resources, as you say, their training is usually above the standard their countries are otherwise able to offer. So I'd expect them to perform better than the shitty conscripts that make up large parts of these third-world armies, but poorly compared to special forces organized strictly for competence like the Navy SEALs.

That's not to say they're nothing: we can probably expect them to at least maintain cohesion in the face of American-Israeli bombings, and thus resist internal uprisings while also keeping up harassment of shipping along the strait. So I'd say it's fair to consider them a factor in this whole affair.

The phrase is often taken out of context by neocon Americans to show that Iran is hellbent on America's destruction, and thus to justify their highly violent efforts to destroy Iran in turn. Given the context of not just the phrase but these politics surrounding it, I think it actually is meaningful to point out the translation issue, since 'death to America' isn't necessarily proof of what they claim or justification for their own destructive desires/rationale.

Basically it comes across as disingenuous to use the phrase as a basis for wanting to destroy Iran, when idiomatically it's supposedly weaker than it's presented as being. But then again this whole affair is hopelessly mired in bad faith.

I'd say the earlier eras seem to have had greater contrasts: while Numenor and such have died off, so too have all the evil dragons and whatever that demon in Moria was. Good and evil were more distinct and individually potent, embodied by externalized creatures; as Middle Earth evolves towards the recognizable world, they collapse towards a unitary point embodied in individual men (e.g. Boromir and Denethor). That is the main thrust of the matter as I perceive it, as far as its tendency towards one state or the other.

And if Middle Earth is literally darker, it is because it represents the world as an adult perceives it and not a little boy.

I think the vanishing of the elves and all of that was more to express the author's nostalgia for a preindustrialized past or some such. It's not so much that the world becomes darker over time, but more that the magic goes away. This could also be seen as reflective of childhood nostalgia, perhaps, a theme that was popular in Victorian fiction and which Tolkien was probably influenced by. Of course, these ideas interact with the themes about good vs. evil in certain ways, but I would say that the overall idea presented is more nuanced and indefinite than 'the world becomes increasingly evil'. One of the story points is that while good becomes increasingly degraded, evil does as well, with Sauron being much weaker than Morgoth and so forth. It's an arc from fantasy to mundanity, until evil is represented by your bland cubicle boss.

I was using chatbots to help me train dynamic Japanese vocabulary recall or whatever. Discussing random topics with one wasn't working, as those conversations tended to focus on abstract vocabulary. So I had the idea of playing RPGs with the AI: the AI would be dungeon master and pose scenarios I'd need more concrete vocabulary to interact with. Should have been simple, right? It's got all sorts of RPGs in its dataset to crib from. Anyways, the results were one of the many things that, for me, totally dispelled any illusion that LLMs possess intelligence. I can't provide a full summary of its extreme and numerous flaws, but everything it tried to do as dungeon master ended up cliché, nonsensical, unstructured, and inappropriate from every point of view, and it quickly lost the context of what was going on after only a few prompts. You could probably create a much better virtual DM with pre-LLM computer generation technology.

Back during the neoliberalization of the 90's, it was said that one of the results would be more wealth to the upper middle-class professionals and less wealth to hoi polloi. But they justified this on the basis that there would be greater absolute wealth to tax, and thus more government largesse to go around. So the upper middle class could be said to benefit from favorable economic policies, with taxes being a way to partially redistribute some of these uneven benefits.

And ultimately, all economic outcomes are contingent on government policies, so I don't see why policies which directly affect market regulation and such should be treated as special compared to policies regarding taxes and social programs.

I wonder if they're under pressure from higher-ups to make stuff using AI? Their bosses have probably been convinced by AI hysteria that extreme gains are possible, and that if those gains aren't materializing, it's a problem with how they handle their prompts rather than an impossibility owing to the deficiencies of the technology. Not wanting to be left behind, they mandate that everyone use AI and increase their output in line with what the hypists say is possible, typically an efficiency boost of twenty to one hundred percent. And thus pointy-haired bosses lacking technical expertise relevant to their companies' products, unable to understand the deficiencies of AI output, tank market viability while boosting investor enthusiasm in the short term by playing into the popular biases of the financialized scam economy.

The bosses get rich, as do the tech scammers and their affiliates, but the economy inches closer to its doom once the bubble goes pop.

There was a man I spoke to wandering up and down my alley the other day. He was small by nature and shrunken even further by age. He had a walking aid and was somehow managing his stroll even though the entire alley was covered in ice.

Anyway, after he called to me across my backyard, I was engaged by him in a lengthy conversation in which he asked me about the species of pine growing in my yard and told me about how he used to raise dogs for a living when he was younger. He told me that his former best friend at one time killed his favorite pet dog by throwing it down a flight of stairs.

Originally I took him for a homeless person and he seemed a bit off owing to his advanced age, but I still found him an interesting enough person to meet and speak to, and nothing about the experience could have been replicated by AI.

My understanding of the gig economy is that it's a further step in the disenfranchisement of workers, driven by their weakening bargaining position as demand for unskilled and semi-skilled labor continues to fall. I don't have an in-depth understanding of it, but it seems that many gig workers occupy precarious positions, accept low wages, and lack many of the benefits workers in the past enjoyed, such as union representation and health care plans, and on top of that they have to supply their own equipment (cars for Uber drivers, for instance). App-based employment has essentially undercut the collective bargaining position of workers and empowered the huge, centralized corporations that control the apps.

Personally I don't find AIs as fun to talk to as any human. To me, they're like an interactive encyclopedia. It is fun to read and learn about things, but they can't stand in for the human element, either at the individual level or at the level of an entire society or group (like the motte). Ultimately I find them in some sense desirable in terms of their first-order effects (helping with research, etc.); it's their second- and third-order effects I'm worried about, where I think, as I explain elsewhere, they will kill off large parts of human culture, remap the class system, and generally reinforce all the ongoing negative trends that already seem apparent. In a sense they are a continuation of capitalism and its logic.

Most art was already commodified, and it was commodity artists, not creative artists, who got the most brutal axe.

Essentially, contrary to your point about AI having imagination, creativity is the primary skill it lacks. It's basically a machine for producing median outcomes based on its training data, which is about as far away from creativity as you can get.

But for most artists, their jobs were based on providing quotidian, derivative artworks for enterprises that were soulless to begin with. To the extent that creativity was involved in their finished products, it was at a higher level than their own input, i.e. a director or something commissioning preset quotidian assets as a component in their own 'vision', the vision being the creative part of the whole deal.

However, I do believe creative artists will be threatened too. It's a little complicated to get into, but I think creative art depends not just on lone individuals or a consumer market, but on a social and cultural basis of popular enthusiasm and involvement in a given artform. I'm talking about dilettantes, critics, aficionados here. It's a social and cultural pursuit as much as it's an individual or commercial one, and I think that AI will contribute to the withering away of these sorts of underpinnings the same way corporate dominance and other ongoing trends previously have.

So for the artistic field, I envision complete and total commoditized slop produced by machines, once the human spirit has finally been crushed.

Who are these people, exactly?

Internet nerds like us who based their lives around forums, intellectualism, and, in my case, literature. The new AI world of dopamine cattle harnessed by the tech fiends suggests the total obsolescence of any sort of life that isn't fully grounded in the concrete or else enslaved for the purpose of dopamine-slop control. Admittedly, some people here have lives which go beyond the abstract.

I had a somewhat related idea. It concerns ways that middle-class professionals could be screwed. I haven't fully hammered it out, but here's the gist. The value of automating labor is that it frees up human resources for other tasks: rather than a hundred artisans hand-tooling goods, you have one machine operated by one engineer producing the same goods, and ninety-nine people who can perform tasks in other areas of the economy.

But with AI, there will be an extinction of an entire class of meaningful work. That which is done by the middle class. There aren't adjacent fields for them to move into once displaced, as those will also be taken by AI. Their only options will be to move up or down, into different classes of the economy, and for the vast, vast majority of them, it will be a downwards spiral.

The area below the middle class economy is called the gig economy. So the value of AI is that there will be a wealth of gig workers, and thus fast food can be delivered more cheaply than ever before.

That is the one benefit of AI we are certain about.

There is a hypothetical scenario, a longstanding dream of science fiction, where with infinite labor afforded by AI there will be infinite opulence. However, some points that contest that are 1) there is only so much demand for consumables and market goods and services, so that economic demand begins to be overshadowed by status concerns and non-economic spheres of life in terms of desired things, 2) many of the inputs that go into supplying those goods and services are finite (i.e. resources) and so their creation can't be infinite, 3) political ramifications suggest reduced power and thus leverage for the displaced, and so their economic needs could easily be ignored by those who retain power.

All in all, there looks to be dark times ahead.