@iprayiam3's banner

iprayiam3

3 followers   follows 0 users
joined 2023 March 16 23:58:39 UTC

User ID: 2267


My wife has recently given me a little gentle ribbing about my softer-than-usual belly. We were at the beach last week, and she turned to me and said, "Yeah, seeing all these shirtless men makes me realize how in shape you actually are."

Point being I agree.

IMO this comment is way too uncharitable...I'd hesitate to call it laziness.

To clear this up, I didn't call it laziness; I just listed that as a possible pragmatic blocker. My point is that it's trivially solvable in a technical sense. It's really, really easy to think of ways to evaluate students, or have them practice learning, in scenarios where AI cheating could be mitigated. It's not remotely unsolvable in that sense. But there are, to your point, structural and individual reasons that make implementing such a solution harder.

I have sympathy for these defenses, but not infinite sympathy. If it's something any homeschool parent could solve without any innovation, then the school system needs to be able to react to it in order to remain a legitimate concept. We can't just 'oh well...' cheating at scale. It needs to be treated as existential to schooling, if it's really this widespread.

There is no legitimate way an institution of learning can remain remotely earnest about its mission as a concept and still allow graded, asynchronously written reports.

Now of course, many of the blockers to reacting to this are an outgrowth of similar challenges schools have faced for decades: the conflicted, in-tension-with-itself mission of schooling in general, as described in the excellent book Someone Has to Fail. Schools simultaneously trying to be a system of equality and a meritocracy will fail at both.

But AI has stopped the buck-passing; like so many other things, AI is a forcing function of exponential scale. I think if the can gets kicked any further, every single semester, every single assignment, the entire idea of schooling massively delegitimizes itself.

Cheating with AI in school is trivially solvable on an object level. It’s just that the bureaucracy and or faculty don’t want to.

Whether that’s due to laziness, head in sand, politics, profit, or some sense of “inequity”, or any other misaligned incentive is up for debate.

I assume the inequity part is a decent amount of it. If you start actually forcing measurable accountability, it will take away other subjective safety nets.

This will affect pass rates and almost certainly have some disparate impact.

But the point is that anybody with even a little bit of intelligence could think up a plan to counter AI cheating for any given course or learning objective.

A few walked out in disgust in favor of Hananianism, others embraced rightoid brainworms.

Stop trying to make fetch happen.

I think most people are missing it, but this whole shaggy-dog story is just to bury another love letter to Hanania.

"What is with him" is that he genuinely believes that his God, through scripture, has commanded him to support Israel, and there are many in the upper echelons of the US government who genuinely and wholeheartedly believe the same thing.

I don't want to go off the deep end speculating on his stated faith, but at first glance, this part felt somewhat post-hoc to me. I don't doubt that his support for Israel is tied to his faith to some degree, but I also doubt that that particular verse is the driver rather than the justification.

It is suspicious to me that he had the verse memorized (and corrected Tucker on the exact wording at one point to narrow his interpretation, even though his own quote was not quite right anyway), and had the 'I learned it in Sunday school' framing, but didn't know where in the Bible it was, or provide any additional context outside of the single quoted verse.

It just came off to me like a digestible soundbite to rattle off, rather than the starting point for a developed point of view. I think Tucker sufficiently surfaced this in his pushback, but it didn't come out explicitly.

Search has continued to deteriorate steadily over the past 15 years and this is just more of the same.

Google wants to retain people who are asking AI instead of searching, which makes sense from their point of view.

But it’s misaligned with the user’s incentives. If I open Google instead of ChatGPT, it’s because I want a search, not an AI response, nor an ad.

It’s just a terrible experience all around.

I’m not talking about Biden or Pelosi or other Democratic leaders. There are many, many serious Catholics who are anti-Trump and also anti-abortion. You and I can discuss whether they are mistaken to keep voting Democrat, but these people exist in large numbers.

I am saying that if this guy is hypothetically anti-Trump, pro-immigrant-healthcare, and anti-abortion:

  1. This describes a ton of serious, involved Catholics. You are right that they are much less common in trad circles.

  2. A reliably large proportion vote Democrat. Sure, once you start filtering for theological rigidity, the Democratic voters become more and more of a minority, but they still exist.

  3. Voting patterns aside, these folks are much more Blue Tribe than Red Tribe.

  4. This set of views probably describes the most left-wing bishops in the US, including ones who are shaky on sex stuff and ones who are solid.

Again, there is no evidence this guy is Catholic, so I’m just playing pattern matching.

No idea what denomination this guy is, but in the Catholic world, pro-life, pro-immigration, pro-social-justice (like healthcare for the poor), anti-Trump is not particularly idiosyncratic. Rather, it's extremely common, and a relatively consistent worldview. This probably describes the pope himself, and many priests and bishops in the US.

However, I don't agree that this maps to 'Red-coded'. I think it's the default left-wing half of Catholicism in America, consistently votes Democrat, and is pretty solidly Blue Tribe, just not woke.

I know. I am saying that, in that case, the connection you are making to abundance and the cultural malaise makes less sense. AI (not art specifically, or even meaningfully; that was not meant to be causal, just an example) will increase the meaning deficit as it removes purpose for a lot of people.

Ok, but this is a wholly generalizable dismissal of the OP's observation about malaise within a society of material abundance. That doesn't necessarily mean it's wrong. But I think those who recognize the OP's observation should consider AI abundance making it worse.

Sure, there will still be transcendent art, just like now there are plenty of meaningful and spiritually fulfilling lives and communities within the culture. I myself have the latter in spades.

But it can be true both that the potential for any individual or subgroup remains and can be found, and that the broader culture deteriorates and the ratio worsens.

We're materially better off than ever; we're spiritually dead. We have more freedom than ever; we're trapped in our heads like anxious prisoners. We solved hunger, and crippled ourselves with food.

Just wait to see what AI does here, when jobs really start getting replaced with, at best, UBI.

This is generally my argument against AI art. I’m not moved by the “look at the abundance! Look at what is now possible and accessible at scale!” genre of argument, like the ones made a few threads down, precisely because I don’t think they are net positive for any sense of flourishing. It’s a hedonic treadmill. Art thrives under constraint, and the human spirit works similarly.

My biggest disgust here is not about the object-level position, but the fact that for the past two weeks, MAGA has been pushing all the fiscal irresponsibility of the BBB and slandering any detractors as traitors to the border, with a message that this now completely and totally undermines.

None of the arguments for raising the debt ceiling or SALT deductions or anything else have a leg to stand on.

Again, the analogy might not be a very good one, but we’re getting hung up on technical comparisons. My analogy was supposed to focus on the social-ritual nature of where the dividing lines are: discrete moats around the methodology, rather than comparisons of outcome quality.

I can admit that it's not an adequate comparison, but the distinction I'm making is between repurposing existing art (signing a premade card) and outsourcing it to a computer (someone else signs the card for you). I don't think these are directly analogous. I'm not saying they belong in the same category, but the analogy is on the gradient down from personal touch to outsourced sentiment.

I'm not trying to make a generalized defense of lazy album covers. And I fully accept there's an argument-as-soldier going on here, masking a more utilitarian concern rather than an ontological one. Gun to their head, I'm sure a lot of people criticizing the AI album cover would prefer an interesting AI cover to a lazy repurposed image in a given instance, especially for a two-bit band. But they are arguing for a moat around actually creative album art in general. With the repurposed picture, it can be lazy or unique, but not both.

This is analogous (but not categorically equivalent) to the moat of 'you at least have to sign the Hallmark card yourself'. OBVIOUSLY that's less meaningful than something unique, and closer in practicality to nothing at all. But the idiosyncratic moat of 'signed card' has social significance that defends against a drift into nothing at all.

How would you compare something like content aware fill to inpainting or other AI image techniques?

For the argument about AI, I would not compare based on outcome or level of effort, because I agree those are somewhat gradients. It is a question of which technology was used, which has clear and unambiguous answers.

As far as I understand it, content aware fill uses ML, but not Generative AI.

So if one is against AI as a general category, then they can make an argument against CAF. Or if they are specifically against Generative AI, they can make an argument for CAF.

My main point is that Unaltered, Digitally Altered, CGI, ML, and GenAI are all scrutable categories, not gradients or judgement calls. Now, the valence you assign to the categories can involve gradients or judgement calls.

But I disagree with the argument that the categories don't discretely exist, or that we can't assign valence to them due to equivalency of outcome.

Sure but album cover art is already a Lindy anachronism, and this makes sense to be a place of resistance. Neither albums nor covers really exist anymore. It’s more obligatory ritual than anything else and I think someone faking a ritual is more taboo than someone participating lazily.

It’s like the difference between sending a thoughtful thank-you note, signing a card, and having someone else sign the card for you.

Everyone can agree that the first is superior, but the autist mistakes the second and third for being equivalent.

And the spot that has bugged me for a while now: how much AI/digital assistance is really crossing the arbitrary line you've drawn?

My personal takes aside, how is this an arbitrary or ambiguous line? Whether or not an LLM was employed is pretty black and white.

The argument I keep seeing ends up taking this shape over and over:

AntiAI: LLM different from other tool. LLM bad.

ProAI: LLM not bad. LLM no different from other tools!

Whether or not 'LLM bad', it seems obvious that the LLM is qualitatively different from other technology (except perhaps slave labor, but that's a tangent not worth exploring). But what I see most in the ProAI response is not a rebuttal of the leap from different to bad, but a rebuttal of bad via a denial of different. Which I think is the weakest argument for a positive AI position.

See, it’s three where you get the minivan, and 4 is no issue.

It might help that my kids are relatively close in age, but logistics ain’t an issue for us.

Whether I take my son to his piano lesson and watch my toddler, or take my son to his piano lesson and watch my toddler and a baby, is not significantly different.

There's a lot of serious consideration here, and serious replies; so I'll add something a little more flippant. For me and my wife, our 4th was the easiest marginal change in every way (except the bedroom splitting). YMMV of course.

I originally believed that DOGE was about systems, not goals, and was a way to create a pipeline of influence between Musk's wing of the techbro elite and government power.

From looking at what he was cutting, I can believe it was something more like an attempt at a hype snowball. If they found quick, obvious, and indefensible waste/fraud, while also overturning norms with 'you can just do that?!' slashing, the hope was that there'd be snowball momentum to tackle bigger things.

In other words, if you come in and start trying to follow all the polite bureaucratic processes around suggesting entitlement cuts from an advisory capacity, you will never even get off the ground. Alternatively, you come in and shake shit up by highly publicizing obvious waste, in an effort to get populist support and visibility.

I think he actually almost reached escape velocity here, or got as close to a successful system as possible.

Conservatism is not an ideology. It's an orientation. Moreover, it's an orientation against a reference point, (which is why today's conservative is yesterday's liberal etc.)

People confuse this because they contrast the term with liberalism, which can mean two things:

  1. is just the opposite orientation of conservatism, and
  2. is an actual ideology - the prioritization of safeguarding individual freedom and equal rights through rule of law and representative government

Most of the useless political showerthoughting is downstream of the confusion caused by the fact that the word 'liberal' can refer to either, while 'conservative' can't.

Hence the complaints of liberals not being liberal, of conservatives not being ideological, of assuming conservatives are ideologically illiberal, etc.

At the end of the day, both American Conservatism and Liberalism are big tents, each containing both liberals and illiberals.

70% sure, maybe. But what happens if it's 'just' 2008 levels of sudden disruption? And then a small stagnant window before another dive. I am more worried about falling into a series of local minima, where the immediate 'solutions' get us into a worse scenario.

In some respects, a 70%+ employment disruption or a Skynet scenario could be better, in creating a clear, wide consensus on the problem and the necessary reaction. I am more worried about a series of Wile E. Coyote moments: running off a cliff before he realizes it, falling, then repeating as he tries to get ahead of the next immediate shift.

No, I think I was unclear; yes, this is in line with what I'm seeing. When I said sr. mgmt doesn't realize the impact, I mean the follow-through logic of the macro effects of every other company also freezing spending and hiring.

No, I know. Of course that’s the biggest part of it. My overall point is that I’m seeing uncertainty expressed as AI uncertainty. Whether that’s just a rebundling of tariff (etc.) uncertainty or not, my fear is that it is contributing to increased general uncertainty, which will compound the economic fallout from that uncertainty.

but that might as well be a result of cutbacks due to economic uncertainty.

But this uncertainty is what I’m interested in. How much is effect, and how much could snowball into cause? Buyers get skittish, forecasts go down, and so forth. I’m not saying it’s the leading cause of uncertainty, or anywhere near it. But I am noticing it becoming a contributing factor.