Rov_Scam

2 followers   follows 0 users   joined 2022 September 05 12:51:13 UTC

No bio...

User ID: 554

I don't think Twitter has much to do with it. The night before Thanksgiving, I attended a Taylor Swift trivia night that my cousin's boyfriend convinced me to attend because it was at a local brewery. The vast majority of the attendees weren't the typical brewery clientele, but suburban moms and their young daughters. Not too many men. And the place was absolutely packed; there were at least 20 teams. I guarantee you very few of these people have Twitter accounts, or care too much about Elon Musk. I attribute Swift's sudden blowup to the following factors:

  1. She was already very famous. This may seem obvious but it seems like there's more staying power when an already famous person reaches this level of popularity compared with the meteoric rise of an unknown. She's 34 years old and has been in the public eye for nearly 20 years; there's no sense that she's the flavor of the month.

  2. She has a history of making risky professional moves that have the potential to wreck her career but end up bolstering it. In 2014 there was some serious discussion as to whether she'd be able to appeal to the pop market in the same way she appealed to the country market. There have long been country stars with crossover appeal, but most of them never stop ostensibly being country musicians, no matter how pop they get. The only other musician I can think of who pulled this off was Linda Ronstadt, but she gets an asterisk because she was at the fringes of the country world; she came out of the more rock-oriented Laurel Canyon scene rather than being a product of Nashville. I think a big part of the reason Nashville artists are hesitant to break out like this is that country is a sort of security blanket. The country world wants something that's ostensibly country, and it will loyally buy it if it's marketed as such. Making a full transition out of Nashville means casting off the last vestiges of this to make it in the wider world. You run the risk of losing your old audience and failing to find a new one. But she correctly calculated that the country fans who were buying her music were probably already buying pop records anyway, and that her pop audience was where all the growth was. (And when I say "she," I mean whoever does her marketing.) So she managed to get two audiences for the price of one, so to speak.

  3. Then — and people often forget about this — she pulled her music off of streaming services because she didn't like the business model. For three years. I'm not going to attempt to quantify the impact this had, but I doubt it did her career any favors in the short term. However, it probably helped her career long term, because it encouraged people to buy her albums rather than stream them. This probably fostered a sense of loyalty that she wouldn't have had if she'd been available at the touch of a button to anyone with a Spotify account. And then it was a big deal when she got back on the streaming services, which again increased her audience.

  4. So at this point she's been steadily consolidating her power for over a decade. This is important in and of itself because most pop stars don't stay on top for that long, especially just by being pop stars. Contrast this with Lady Gaga, who is still famous but more because she did things like movies and albums with Tony Bennett. No one has cared about her pop records since 2011. The fact that Swift is in her mid-30s and has been able to sustain a career since her days as a teen idol without making any major changes is an accomplishment in and of itself and probably feeds into our current moment. She's been around long enough that women who listened to her in high school can take their kids to her concerts.

  5. Despite her fame, and her numerous celebrity relationships, she's managed to avoid the kind of scandals and tabloid gossip that surrounds other pop stars, especially ones who become famous at sixteen and have to navigate the transition to adulthood while in the public eye.

  6. She has an uncanny knack for making decisions that are totally about money and convincing people that they're not about money. The whole "Taylor's Version" thing is a prime example. She didn't like the fact that she didn't own the rights to her old recordings. The main advantage of owning the rights to her recordings is that she can collect all the money they generate. Otherwise, there's no real advantage. This is a big deal for most people, but for someone like Swift, who has more money than she's ever going to be able to spend, the schlubs at whatever private equity firm owns the rights to them probably need the money more than she does. But she casts it as a matter of principle, rerecords new versions she owns the rights to, and convinces her fans to shell out money for five different collectors' editions of the same albums they already own. The whole thing was about as transparent a cash grab as you could find, yet she pulled it off in such a way that even people who couldn't care less about her career thought it was a slick move to stick it to those fatcats. It got her the kind of publicity you can't buy while netting her a pretty penny.

  7. And, finally, in the same vein, we have the Eras Tour. In every pop star's life, there comes a point when they are no longer a "frontline artist", by which I mean a contemporary artist who makes contemporary music for a contemporary audience. At some point, people don't go to your concerts to hear the new album but to hear the old favorites. It's usually the obvious sign that a band is over the hill — there's a new album out and the kind of people who paid 70 bucks to hear you play don't give a fuck. And if your biggest fans no longer care... Becoming an oldies act is depressing. Bob Dylan and Neil Young have defiantly refused to go down that path, regardless of the crap they take for it, and insist on being contemporary musicians who will tour the new album and maybe throw in a few old favorites. Mike Love's insistence on the Beach Boys touring their '60s hits in the wake of the compilation album Endless Summer's success in the mid-1970s drove a wedge into the band that it never really recovered from. (And most of the band was younger then than Swift is now.) The huge appeal of the Eras Tour was that, for the first time, Swift would be taking listeners on a musical journey through her entire career. She was becoming an oldies act, proudly and deliberately, at a time when she was still viable as a frontline artist. This is almost unheard of. Sure, contemporary bands usually play some older material at all their shows, but it's unusual for someone to actively embrace what is usually the sure sign of a has-been. Because the dirty secret of oldies acts is that they're very profitable. People like hearing old favorites, even when they're still willing to pay good money for the new shit. And the whole Taylor's Version thing was perfect cover. Combine this with the fact that she hadn't toured in half a decade and the stage was set for all hell to break loose.

I think it's worth recapping American political history during the period when Millennials became politically aware. While there was contention surrounding the election of George W. Bush, things went back to normal pretty quickly. The most exciting thing to happen during the early Bush administration was the Hainan Island Incident, and that was viewed by the media more as a test of how the president would respond than as a serious culture war item. Then 9/11 happened, and Bush became incredibly popular, even among liberals. These high approval ratings would slowly atrophy over the next two years but were still around 50% at the time of the 2004 election, which he won by a decent margin. But this wasn't enough to stanch the bleeding. While the Iraq War is largely blamed for his downfall, particularly the unexpected insurgency and misconduct issues like Abu Ghraib, these only seemed to alienate liberals. What did him in among Republicans was a series of unfortunate events that occurred in the fall of 2005—the insufficient response to Hurricane Katrina, the Harriet Miers Supreme Court nomination debacle (Miers was a close associate of Bush whose qualifications for the court were highly suspect, and the nomination was withdrawn in the face of bipartisan criticism), the Social Security privatization plan, the Medicare Part D rollout, and the Plame Affair (which resulted in the indictment of the Vice President's National Security Advisor and implicated Bush's Deputy Chief of Staff Karl Rove). None of these incidents on its own would have been more than a minor scandal (particularly the Part D rollout, as problems are to be expected when introducing a complicated new government program), but since they all happened within a span of weeks they made the whole administration look incompetent. By the 2006 midterms even staunch Republicans had begun distancing themselves from Bush, and he spent the last years of his term as a sort of zombie that everyone hated but nobody really cared about. By the time of the 2008 financial crisis he was already so unpopular that it didn't seem to affect him much, especially with everyone's eyes on the next election.

So now we come to the 2008 election. Every pundit agrees that the Republicans need to move on from Bush and the neocons (though it should be mentioned that Bush wasn't a neocon himself), but there is disagreement on which direction the party should take. And by disagreement I mean that nobody has a fucking clue. Most Republicans in the primary try to distance themselves from Bush but endorse similar policies. There are two outliers. The first is Mike Huckabee, the former Arkansas governor who represents the voice of the Bible Belt. The guy has no money or institutional support but makes a splash because Evangelical Christians had been rising as an electoral force for decades before finding a kindred spirit in Bush. They have now proven that they are a constituency that can't be ignored, but the traditional GOP base has no room for someone as blatantly theocratic as Huckabee. The other is John McCain, who has staked out territory as a "Maverick" by bucking his own party over the past fifteen years while still being incredibly conservative in other areas. He wins the nomination but suffers from three critical weaknesses. The first is that he wants to send more troops to Iraq. The second is that the GOP is in the doghouse and he is running against a younger, much more charismatic Barack Obama. These were important at the time but have little relevance to your question. The more salient problem was that he picked Sarah Palin as his running mate. Palin initially seemed like a good choice—his campaign was already at a disadvantage, so picking a woman with executive experience might win him some votes, and her lack of national prominence meant she had few enemies or skeletons in her closet. The problem was that she used her role in the spotlight to blatantly wage the culture war while demonstrating that she lacked basic policy knowledge. When newscaster Katie Couric asked her which newspapers and magazines she read, her response was "all of them", a response she refused to clarify upon further inquiry. Centrists who feared that Obama's superstar status was a mask for his lack of experience and vague policy proposals now found they couldn't vote for McCain, as it would put a demagogue like Palin one heartbeat away from the presidency. McCain lost in a landslide.

Now it's 2009 and while McCain is back in the Senate like nothing happened, Palin and Huckabee are on speaking tours in an attempt to stoke the flames of the culture war. The Tea Party has come into existence, a loose movement that is ostensibly in favor of returning to the libertarian principles of the Founding Fathers but is in reality a lowest-common-denominator culture war movement. The salient feature of the Tea Party is that they aren't just opposed to Obama and the liberals, but also to Establishment Republicans, whom they brand "RINOs" (Republicans In Name Only) and blame for enabling the liberal agenda. Over the next several elections, numerous Tea Party-backed candidates will be elected to office, many of them replacing more moderate Republican predecessors. In 2012 the Republicans nominated Mitt Romney to challenge Obama. Romney only won the nomination after a slogfest with approximately 742 other candidates, most of whom were culture warrior flashes in the pan like Tim Pawlenty and Michele Bachmann. Romney himself was a traditional New England Republican who had served as governor of a liberal state. But in the political environment of the time, he had to pay lip service to more traditional conservative ideas. This put him squarely in a position where he had no real chance of winning; he was too traditionally conservative to win over liberals who were tiring of Obama, and too close to the Republican Establishment to inspire anyone on the fringes. It was an election of two boring candidates, and to the incumbent went the spoils.

Given that Tea Party rhetoric seemed to be paying better electoral dividends than traditional Republicanism, candidates for the 2016 Republican nomination would all have to move in that direction. The problem with Tea Party rhetoric, as I alluded to earlier, was that it seemed geared primarily toward stoking the culture war. It was ostensibly libertarian, but not in any truly principled way, only to the extent that it would serve culture war ends. So taxes and regulation were obviously bad, but not to the extent that anyone would promote policies that would actually impact anyone. Keep the government out of my Medicare. What's more important is that you brand Democrats as socialists for proposing any additional spending. Call for tax cuts and a reduced deficit but make no attempt to touch programs that are actually expensive, just programs that your opponents pushed through. Add in a healthy dose of Judeo-Christian reverence (to appease the Huckabee camp) and nationalism. Almost every GOP candidate in 2016 was running on some variation of this theme, but Trump found the magic formula—he ditched principle altogether. All the traditional politicians had tried to incorporate the new ideas into a consistent platform. Trump just went for applause lines. Back in 2007, Colorado Congressman Tom Tancredo ran for president on a campaign of reducing immigration and kicking out illegals. It went nowhere. Looking back at his old speeches, it's clear that his problem was that he made actual, principled arguments against immigration. Trump knew that there was little call for that. It's much easier to say that the Mexican government is sending rapists and that building a wall will cure all our ills, and to tell your critics to piss off rather than try to actually address their concerns.

I'll stop there because we all know what has happened since then and it's more current events than history. The point is that since the oldest Millennials came of age there hasn't been a time when it's been attractive to become a conservative, and the prospect has gotten continually worse as they've gotten older. During the early 2000s, the primary criticism of Bush had to do with the Iraq War. Now that our Middle Eastern adventures have ended, it wouldn't surprise me if some older Millennials turned to traditional neoconservatism as an antidote to contemporary progressive politics. The problem is that the Republican party has spent the past 15 years distancing itself from neoconservatism and turning all the old favorites into never-Trumpers. The party has come to represent few of the things more moderate liberals find attractive about some conservative candidates and nearly all the things they find repulsive about them. Contrast this with Boomers: if you were 30 in 1980 you spent your early adulthood in a dismal 1970s economy and probably staked a lot of hope in Jimmy Carter. After his lone term goes worse than anyone could have imagined, a fresh conservative party comes in with new ideas, and by 1984 the '70s are a distant memory. Or imagine you're a Gen Xer, who came of age at a time when Clinton became one of the most successful presidents in recent memory by outflanking his opposition on the right. So far in the 21st century, the Republicans have yet to produce the kind of Reagan/Clinton figure who wins reelection easily and leaves office at the height of his popularity. The Republicans have been trying to reinvent themselves for the past 15 years, and until that happens, it's going to be very difficult for someone who started off as a liberal to shift to conservative. For Millennials, that ship may have already sailed.

A couple weeks ago I dropped a couple of names on some people whose ages ranged between mid-30s and mid-60s and I was met with blank stares. Even after explaining who the people were, everyone was still drawing a blank. The names were Chandra Levy and Gary Condit. For those who are unaware, Chandra Levy was a Federal Bureau of Prisons intern who disappeared in the spring of 2001. Her disappearance made national news when evidence emerged that she had been having an affair with California Congressman Gary Condit. There was never anything approaching evidence that he was involved in her disappearance, but his continued denial of any intimate relationship in the face of nearly overwhelming evidence gave him the aura of a man who wasn't telling the truth, and speculation ensued.

If you're too young to remember the case, I'm bringing it up because it was huge at the time. The New York Times ran over 50 stories about it between May and September. To put this in context, the other big news stories during that period were the Microsoft antitrust suit, the Bush tax cuts, the Andrea Yates child-drowning case, and the president's monthlong August vacation. It's hard to gauge the coverage of most of these, but Yates merited fewer than 20 articles, and the rest weren't exactly corkers. The Levy disappearance was easily the biggest news story of that summer, until 9/11 pushed it off the front page. Even then, it had enough staying power to remain in the background for years afterward as new developments arose. Condit sought reelection but lost the primary the following March. The body was found that May. A man who had previously been convicted of attacking other women in the area where the body was found was convicted of the murder in 2010, but was released six years later after appeals revealed that the prosecution's case was terrible. As recently as last year, the Times was still following the case, this time covering the malicious prosecution claims the prosecutors now face.

It was a big story. It may have only dominated the public consciousness because it was the only interesting thing in an otherwise uninteresting time, but it dominated nonetheless. It's no longer front-page news, but developments still merit mention by the Newspaper of Record. And yet plenty of people who were certainly old enough to remember draw a blank 20 years later; apart from the aforementioned updates and the occasional podcast dedicated to these sorts of things, the story seems to have vanished from the collective consciousness. The same is true of the 1977–78 Ogaden War, or the Bhopal disaster. Now imagine trying to explain to someone how big a news story was a hundred years after the fact. Are you familiar with the Hall-Mills murder? It was easily the biggest murder story in American history until the Lindbergh Kidnapping, and was much bigger than any popular crime story since the OJ Trial. Yet today it only gets a mention in true crime books and podcasts and such. If someone frozen in 1922 were to wake up today and ask about the resolution of the case, he might be incredulous to find out that no one has any idea what he's talking about. Even big political events barely merit discussion. Teapot Dome may be mentioned in every US history book, but good luck finding anyone who can explain what the scandal was (and it was one that jeopardized Harding's presidency, though he would die before it was resolved). So no, there's no one article you can point to that will fully express the magnitude of an issue to someone 100 years in the future.

Semi-related, Probably Friday Fun Thread Material But It Fits So I'm Posting It Here Anyway: A couple years ago I crashed a billionaire-adjacent wedding. To avoid burying the lede, it was this wedding, which, being a flamboyantly gay wedding, was a lot kitschier than anything Bezos could ever dream of. The lucky groom was 84 Lumber magnate Joe Hardy's grandson, and the wedding was held at Nemacolin Woodlands Resort, which was owned by Mr. Hardy and is now managed by the groom's mother. I'm surprised the place hasn't changed much since Mr. Hardy's death, since it was a vanity project that lost money, and his daughter was supposedly planning on making it profitable after the old man kicked.

ANYWAY, I serve on the board of a nonprofit that was having its annual kickoff party at a nearby bar, which was attended by a friend of ours who happens to work at the resort. My friends and I had no idea about this wedding, but our friend was talking about how he'd worked long hours getting ready for this elaborate event, the point being to avoid actually having to work the event itself, and he mentioned a few details, like that it was taking place at a certain golf hole. It was at this point that someone, possibly me, suggested that we should crash the event. Although the resort wants you to think otherwise, most of the roads on what appear to be the resort grounds are public, as there are several in-parcels with private houses on them beyond the front gates. It would be trivially easy to park alongside the golf course and sneak into the wedding, especially after dark.

No dice, our friend said: while the ceremony itself was at the hole, it had already taken place that morning, and the actual reception was being held in a tent at a different part of the golf course, where it wouldn't be possible to just slip inside unnoticed. It was at this point that the plan began to crystallize. An outdoor reception would actually have been worse, since it was early June and it didn't get dark until after 9 pm. Our attempts to pump him for information were only marginally successful, as he was under strict orders of confidentiality and only revealed the location of the ceremony because it had already happened that morning. We reminded him that he was leaving his position in a month, as he had just passed his home inspector's test, but he wouldn't budge. Luckily, I had already established that the festivities were expected to go rather late into the night but weren't starting any later than normal, so we figured 8 pm would be the ideal time to go.

My plan took advantage of one simple idea: act like you're supposed to be there. The problematic thing about a wedding like this, though, is that it's a sit-down dinner with a strict guest list that's been planned and executed in secrecy precisely to keep people like us away from the thing. But, due to our unique circumstances, this presented an opportunity. While acting like you're supposed to be there is essential, it isn't always enough. We also needed a plausible reason to be there; simply saying my name and demanding entry probably wouldn't work. So that gets us to the third thing we could take advantage of: these billionaire events always have lots of people involved, both as guests and as staff. Our being admitted wouldn't depend on getting past the host or hostess, but on getting past somebody who ostensibly knows who is supposed to be there but realistically can't pick any of the guests out of a police lineup.

The one snag was that our event didn't end until five, and as board members we couldn't just leave. I lived an hour away, optimistically, from both the event venue and the wedding venue (realistically more like 60–90 minutes), the cover story I had in mind wouldn't work if we got there too late, and I hadn't brought a suit with me when I left the house that morning. One of the participating couples that lived close by said I could just shower at their house, but that didn't solve the suit problem, and going home and coming back would be a tight squeeze that might hold up everyone else. At first I saw no way around this problem, until I realized that I didn't have a date. So I frantically began calling women I knew to see if they were interested in crashing a billionaire wedding on short notice (if you happen to be free tonight) and also wouldn't mind stopping by my house and rooting around for suitable clothing. Luckily, this is where having a good bartender comes in handy, and since I knew she was off that night, she was thrilled to engage in a bit of semi-illegal fun.

Shortly thereafter, having realized a serious omission, I called my friend back and instructed her to stop at the liquor store and pick up a bottle of Jim Beam, two handles of Vladimir vodka, and a bottle of the most ridiculous liquor she could find that wasn't super expensive. She was then to go to Dollar Tree and get cards, two gift bags, tissue paper, and delicate wrapping paper. By the time she arrived, two of us had showered and the third was in there and would be putting on her face soon, giving my date plenty of time to shower and get ready herself. In the meantime, we put the Vladdy in a large box and wrapped it, and put the Beam and the other bottle in the gift bags. To my friend's credit she picked up slivovitz, which was such an obvious choice that I was embarrassed I hadn't thought of it myself. For those not aware, it's a plum brandy that's behind the bar at every hunky bar in Pittsburgh and that nobody ever drinks except on a dare. We then filled out the cards in the most ridiculous way possible. Mine was full of Yiddishisms and sentences like "Your cousin Nathan is going to be a pharmacist. Good money in that." My gift of choice would have been a set of towels that said "His" and "His", but we were unfortunately under a time crunch. The third couple arrived and we all piled into my friend's 2004 Lexus SUV that he ironically brags to everyone about owning, figuring that (a) we could all fit, and (b) if we had trouble getting in, he could say "Did I mention I own a Lexus?"

We got there a little after 8. It still being light out was a better break than we'd originally thought; since we didn't know where the tent was, it was much easier to drive around looking for it in daylight than it would have been at night, when headlights would have made us more noticeable from a distance. We located the tent and found a place to park. The first hurdle came when it became readily apparent that most of the guests were staying at the hotel and were being shuttled back and forth in golf carts. Minor detail; the cover story takes care of that. Just keep going. Act like you're supposed to be here.

We arrive at the entrance to the tent, which is of course heavily guarded by black-clad hospitality employees with walkie-talkies. "Hi, Rov_Scam and guest." I give my real name, which the guy frantically searches for on his clipboard without finding. My friends give their names, which of course also aren't on the list. This was the first point at which I considered that giving three uninvited names in a row might raise some alarm bells, but no worries: act like you're supposed to be there. "You know what, we're coming from the Schwa Foundation fundraiser and we left notes with the RSVPs that we wouldn't be eating dinner. That might be why there's a mixup." I had actually thought of this well beforehand, and it seemed to allay the guy's concerns somewhat. Still: "I'm sorry, but none of you are on the list."

At this point, the weaker-willed among us might have given up. The odds were stacked against us. We had just given three names that weren't on the list and a cockamamie story about why we were late. This guy was in no position to let us in. But one thing I do not stand for is being denied access. Asked to leave? All the time. Escorted from the premises? Almost weekly. You can keep the jeans if you promise not to come back to this store? More than once. But I will at least afford myself the opportunity to be thrown out. "Well, I don't know what to tell you," I said, standing there, my date holding a gift bag and the two other couples with us similarly situated. Act like you're supposed to be here. Someone who was actually invited wouldn't just leave because they weren't on some list. He gets on his walkie-talkie and a woman who looks like a supervisor comes over. He explains that we aren't on the list, and looks relieved that this conundrum is out of his hands. I explain everything to the woman, this time adding that I'm on the board of the Schwa Foundation, that my friend is on the board of another nonprofit that she may have heard of (which he is), and that my other friend is associated with the local tourist bureau, which she is for the next two weeks before she gets canned in a shakeup.

If you know anything about Joe Hardy, it's that he wants to die broke and that he will do practically anything for Fayette County, the poorest county in Pennsylvania. It would be perfectly understandable if he took his money and bought an estate in some old-money suburb like Fox Chapel (where he could hobnob with John Kerry and Teresa Heinz) or Sewickley Heights (where he could hobnob with Mario Lemieux), but instead he lives in a house on his resort, which may be an unprofitable vanity project, but one driven by his desire for Fayette County to have a five-star resort. He served a term as county commissioner, which is like Donald Trump serving on the Palm Beach city council or some other local government position that's all work and no prestige. The idea that we might have some legitimate connection to Mr. Hardy's philanthropic activities wasn't beyond the realm of possibility. Actually, his daughter had given us a reasonably generous donation, though it was officially on behalf of the resort, and we never actually met with her.

At this point, it's clear that the supervisor is in a serious bind. There are three options, none of them particularly great. The most obvious would be to engage the hostess to verify that these were legitimate guests who had been omitted from the list by mistake. Unfortunately, this would mean interrupting Ms. Hardy-Knox in the middle of her son's wedding reception with a tacit admission that her own staff is unable to control something as simple as a guest list. Even worse, this party was planned under the strictest confidence. The fact that six random bozos were even able to get this close, let alone that her staff briefly considered letting them in and went so far as to interrupt her evening to be sure, would mean that someone had loose lips, and various heads would surely be rolling down the fairway the following morning.

The second option would be to simply state unequivocally that we weren't on the list and that if we didn't leave immediately security would be involved. This also isn't a very attractive option. Remember, this event is super secret, and the fact that we even know about it means it's highly likely that we were actually invited. We both look and act like we're supposed to be there. We're involved in organizations that would plausibly get a token invitation. We have a plausible cover story for being late. For all this woman knows, we are six duly invited guests, three of whom are prominent members of the local community, who went to great lengths to attend. By categorically denying us entry she would be causing Ms. Hardy-Knox a significant degree of personal humiliation, and Ms. Hardy-Knox would end up spending the following week apologizing on behalf of her staff, Nemacolin Woodlands Resort, and practically the entire 84 Lumber Corporation, assuring us that various heads were at that very moment rolling down the fairway, to say nothing of the fact that someone on the event planning staff must have fucked up royally to omit our names from the guest list just because we weren't eating.

Or, of course, they could just let us in. Remember, this event is super secret and the fact that we even know about it means we're probably invited. Besides, we're Acting Like We're Supposed to Be There. We come bearing gifts. We're standing there patiently, sympathetic to the conundrum we're putting this woman in. What's the worst that could happen if she lets us in? We're all above the age of 35 and don't look like the kind of demographic that would get drunk and cause a scene. It's dark and loud inside, Ms. Hardy-Knox may have been imbibing, and there are literally hundreds of people there; it's highly unlikely that our hostess recognizes all of them personally.

So she let us in, because, when it comes down to it, what choice did she really have? What was the worst-case scenario for us? She asks us who we are, and we give her our real names and positions; at that point she doesn't know that we weren't on the list and either assumes we were legitimate guests or that we were invited by mistake. In the event she asks us to leave, we at first act incredulous that we're being asked to leave a party we were invited to for no reason, but we eventually comply. Luckily, this never came up. She did approach us as we were leaving and made small talk, and while it was pretty clear she wasn't entirely sure who we were, she was very nice nonetheless and thanked us for coming.

The party itself? It was dope, as the kids say. It seems like over the past 30 years there's been an arms race in middle-class weddings, where what was once a buffet dinner at a fire hall is now a plated dinner at a special wedding venue with assigned seats and appetizers a waiter brings around. But as much as the doctors, lawyers, and engineers of the world may break the bank for their special day, they will never even come close to what you can do when money is absolutely no object. For instance, the article only shows a couple pictures from the actual reception, and it looks like those were taken at some point before I weaseled my way in. It mentions some DJ as entertainment, but also has a picture of a stage with instruments on it. The other super top-secret thing about this wedding that no one was supposed to know about, and that even the photographer for Vogue had to keep under wraps, was that the entertainment for the evening was actually Lady Gaga. Performing for a few hundred people, in a tent. I don't even like Lady Gaga, but I'll admit it was pretty special, especially once I was convinced that armed guards with earpieces weren't about to escort me off the premises. I don't want to suggest that all billionaire weddings are this fun, because the over-the-top gayness had something to do with it, as did the fact that most of the guests weren't the rich and famous but friends and family and other semi-prominent people from Fayette County. So yeah, I did that, and it was awesome.

The Al-Jazeera article linked below gives a decent overview, but it's surface-level. I'll try to give a more in-depth summary. After WWII, there was significant local resistance to the traditional Middle Eastern monarchies. These were seen as decadent, old-fashioned stooges of Western sugar daddies. Arab Nationalism, and related ideologies like Nasserism and Ba'athism, sought to cast off the yoke of these monarchies and institute modern, socialist-leaning, authoritarian governments that wouldn't be afraid to play the US and USSR against each other. Egypt was the most dramatic example. Nasser skillfully kicked out both the monarchy and the British, and while Western governments initially had confidence in him as a reformer, he was soon regarded as a loose cannon; when the US refused to sell him arms for use against Israel, he had no problem turning to the USSR, which had no problem accommodating him. Iran would have its own shot at this in the early 1950s, which was famously cut short by the US's own covert restoration of the monarchy.

Nasser's own pan-Arab dreams would lead him to advocate for similar revolutions in other countries. Iraq and Syria would see their own revolutions, and Aden would kick out the British along similar lines. But monarchies still remained, most notably Saudi Arabia, whose close ties with the United States were regarded as suspect. When dispossessed Palestinians formed the PLO in 1964, they looked to the Pan-Arab revolutionaries for inspiration. It was a nationalist movement, but it was also socialist.

When Israel occupied the West Bank following the Six-Day War in 1967, the PLO was forced into Jordan, from where they staged terrorist attacks into Israel. The problem was that Jordan was a monarchy under King Hussein. The other problem was that while Jordan was officially at war with Israel, Hussein was a pragmatist who enjoyed good relations with the United States, and he didn't like the idea of the PLO turning his country into a terrorist state. The last straw came with the PLO's attempted assassination of Hussein and overthrow of the Jordanian government in 1970. Jordanian troops expelled the PLO, who then took up in Lebanon.

So now the PLO is in southern Lebanon, and Yasser Arafat is gaining notoriety as the world's preeminent Arab terrorist. The situation is much the same as it was in Jordan, except that the Lebanese government is a mess and isn't equipped to do anything about it, giving the PLO essentially free rein in the south. When Lebanon erupted into civil war in 1975, Israel, which supported the existing Maronite government, took the opportunity to invade and establish a buffer zone. While they got their buffer zone, it didn't eliminate the PLO; it just drove them further north. By 1981, an international effort had brokered a ceasefire agreement, which calmed the fighting but left a peacekeeping force in place.

In the meantime, Pan-Arabism was on its last legs. Following Nasser's death, Anwar Sadat took control of Egypt in the early 1970s. With Soviet arms, he took one last shot at Israel in the Yom Kippur War, but was soundly defeated. Realizing that the only hope of regaining any of the lost territory was through a negotiated settlement, he agreed to the Camp David Accords in 1978. While this didn't mean the immediate fall of the other secular states, it cast a pall over the movement as a whole. Egypt would no longer be the alpha dog in the region.

But who would be? Among the remaining secular states, Iraq was the most obvious candidate, with its central location, large population, and large army. Within a couple years Saddam Hussein would rise to power and attempt to assert this vision. Syria was small and was wrapped up in wars with Israel and in Lebanon that it couldn't win. Jordan had its own Israel problem; while officially at war, Hussein was too pragmatic about his relationship with the country to be openly hostile. The other monarchies were small and weak, and some were barely independent. The one that wasn't was Saudi Arabia, awash in American arms and domestic oil money. But as a monarchy, it had a credibility problem similar to Egypt's. The ruling family was significantly more conservative than the region's other kings and emirs, and while this meant they didn't seem as decadent as the others, it did make them seem more old-fashioned. It would be hard to unite the people around a king, of all things.

And then there was Iran. Persian where the rest of the region is Arab, Shia where the rest of the region is Sunni. Still one of the monarchies, but things are changing. An exiled Ayatollah has found something for the people to cling to that's a far cry from Pan-Arabism: religious fervor. Guys like Nasser saw this kind of thing as detrimental to their countries' modernization, but by 1979, its day had come. Khomeini would swoop into Tehran and depose the Shah, instituting his own ideal form of religious-led government. I'm going to assume you know about the Iranian revolution so I won't recount the story here. But there was a lesser-known revolt in Saudi Arabia around the same time. In the wake of the Ayatollah taking power, Juhayman al-Otaybi and a group of several hundred fanatics seized the Grand Mosque at Mecca in an attempt to overthrow the House of Saud. The attempt was unsuccessful, but it spooked the royal family enough that they abandoned the meager steps they had taken toward modernization in favor of an increasingly Islamist policy.

By the early 1980s, there were three powers squaring off to dominate the region: Saudi Arabia, Iraq, and Iran. Iraq, sensing weakness in the chaos surrounding the Iranian Revolution, struck first, invading Iran in 1980. Meanwhile, Israel invaded Lebanon again in 1982, laying siege to Beirut, in an attempt to drive out the PLO for good. By the end of the year, Arafat agreed to move operations to Tunis, far out of striking distance of Israel. But that didn't solve Lebanon's problems. Shiites in the south had become resentful of the constant occupations, whether from the PLO, Israel, or international peacekeepers. This resentment culminated in the 1983 bombing of the American embassy in Beirut and the formation of Hezbollah.

Iraq, having committed itself to a war that was looking increasingly like a stalemate, and not being too keen on the whole religious fanaticism thing, was looking less and less like the new alpha dog. Iran's chances didn't look much better. It was bogged down in the war itself, and it would be hard to win followers for a Shiite movement in a region that was overwhelmingly Sunni. There were plenty of Shiites in Iraq, but the situation on the ground made it inconceivable that Iran would be able to draw them into its sphere. But Iran did have one advantage that Saudi Arabia didn't. In 1983, Egypt was at peace with Israel, and Hussein was unwilling to get too involved. Assad in Syria blamed Israel for everything, but he was a secular Ba'athist and his military situation wasn't great. But now there was Hezbollah, Shiites in a land of Sunnis, in perfect position to pick up where the PLO left off before being exiled to Tunis.

So Iran decided to become Hezbollah's sugar daddy. This became readily apparent to the United States relatively early. As Hezbollah started taking Americans hostage in the 1980s, it became clear to negotiators pretty quickly that they took their marching orders from Iran (the Iran-Contra Affair was an attempt to negotiate the release of these hostages). As the power struggle between Iran and Saudi Arabia has grown more acute over the decades, Iran has used its position as a supporter of the Palestinians and enemy of Israel to gain support across the wider region. Consider the Abraham Accords. The basic idea behind them is that if Muslim-majority countries establish diplomatic relations with Israel, it will isolate Palestinian hardliners and force them to the negotiating table. The one potential weakness in such a policy is that, while the governments of these countries know that peace with Israel benefits them in the long run, the position is still wildly unpopular among the Arab public.

Iran knows that keeping the Israeli-Palestinian conflict going as long as possible is to its long-term benefit. While I agree with Trump's policy in this area generally, I shake my head when he or Jared Kushner says that the October 7 attacks wouldn't have happened had he been president. Biden continued Trump's diplomatic policy in the region, and a year ago it looked like Saudi Arabia would be establishing relations with Israel in the not-too-distant future. October 7 provoked a response from Israel that made any chance of recognition politically impossible. A policy of isolating Hamas has been replaced by a policy of simply eliminating them. The more support Iran can give to those who are on the front lines, the more credibility it builds with the Arab public, while Saudi Arabia, beholden to the United States, is forced to stand aloof. Iran is also far enough away from Israel that the risk of direct conflict is relatively low. This is why Israel assassinated Haniyeh in Iran. Beyond his being a high-value target, it sends a message: You're not safe. We can waltz into your country any time and kill anyone we want to, and there's nothing you can do about it. Lob all the poorly-guided missiles you want to.

Whether this strategy pays off for Iran is anyone's guess. Power politics has completely overtaken religious fundamentalism. Saudi Arabia is liberalizing, and the more extreme fanaticism of Al-Qaeda and ISIL has given the movement a bad name in some of these places. After 45 years, Iran's sphere of influence is limited to Hezbollah, Yemen, parts of Iraq, and Hamas, and the last of those is very recent and not exactly in a good position right now. The Saudis, meanwhile, have all the weapons and all the money. They have the West; Iran has Russia and North Korea. Iran has also seen internal resistance in recent years that, while never close to bringing down the government, was much more than anything Saudi Arabia has had to deal with.

I honestly don't know how a guy who derisively refers to Harvard graduates as mere "midwits" can fail to recognize that GPT's responses are crafted in much the same way as those of a political huckster or PR rep—just restate the same thing over and over again to avoid answering the question at hand. I don't have any doubt that regardless of how incisive or specific a question I ask, the response will be something along the lines of "The purpose of this program is to reduce fraud and waste while ensuring continued access to those truly in need". Great, tell me that again in case I didn't hear it the first time. The reason it drives people nuts isn't that you're murdering them with their own rhetoric; it's that it's like talking to a wall.

A Moronically Detailed Explanation

Buckle up, because the answer is complicated. In the first half of the twentieth century, there was a clear delineation between performers and songwriters. There were obvious exceptions like Duke Ellington, but writing and performing were considered separate roles. When recording, record labels would pair performers up with A&R men. The primary job of the A&R man was to select material for the performer, based on the performer's strengths and what they thought would sell. The repertoire largely came from American musical theater, and popular songs would be recorded by numerous artists. "Covers", as such, weren't really a thing in those days, as the earliest recorded version often wasn't the most well-known. For example, "The Song Is You" is most associated with Frank Sinatra and his time with Tommy Dorsey, but it was first recorded ten years earlier. No one, however, thinks of the Sinatra version as a "cover" of a song by Jack Denny and the Waldorf Astoria Orchestra. Another way to think about it is that if the Cleveland Orchestra releases a new recording of Beethoven's Fifth next week, it won't be described as a cover of the Berlin Philharmonic's "original" 1913 recording. It's also worth noting that the heyday of the Great American Songbook was also the period when jazz was effectively America's popular music, and the focus wasn't so much on songwriting as it was on individual style and interpretation.

A critical factor in all of this is royalties. Every time a song is included on an album, played in public, played on the radio, etc., the songwriter gets a flat fee that is set by the Copyright Royalty Board. For example, if you record a CD the rate per track is 12.4 cents, or 2.39 cents per minute of playing time, whichever is greater. So if a songwriter has a song included on an album that sells 40,000 copies, they'd get $4,960 in royalties. With the development of the album following WWII, the industry limited albums to ten tracks to keep royalty costs down. This is why the American versions of Beatles albums are significantly different from the British versions—the UK industry allowed 14 tracks. The Beatles hated this practice (and rebelled against it with the infamous "Butcher Cover"), and by 1967 they had enough clout to ensure that the American albums would be the same as the British albums.
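As a back-of-the-envelope illustration of that mechanical royalty rule, here's a minimal sketch; the 12.4-cent and 2.39-cent figures are the ones quoted above, while the track lengths are hypothetical examples of mine:

    # Sketch of the statutory mechanical royalty described above.
    # Rates are the figures quoted in the text (12.4 cents per track,
    # or 2.39 cents per minute, whichever is greater); the track
    # lengths below are hypothetical.
    PER_TRACK = 0.124
    PER_MINUTE = 0.0239

    def mechanical_royalty(minutes):
        """Royalty owed per physical copy for one track."""
        return max(PER_TRACK, PER_MINUTE * minutes)

    print(mechanical_royalty(3.5))            # 0.124  (flat rate wins)
    print(mechanical_royalty(9.0))            # 0.2151 (per-minute rate wins)
    # The example from the text: one song on an album selling 40,000 copies.
    print(40_000 * mechanical_royalty(3.5))   # 4960.0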

The more important legacy the Beatles would have on the music industry was that they mostly wrote their own material. From the beginning, rock and roll musicians like Chuck Berry and Buddy Holly had been writing their own material, but the Beatles made the practice de rigueur. A&R men were now called producers and were beginning to take a more involved role in the recording process. Beginning in the 1950s, Frank Sinatra had been turning the new album format into a concept of its own. While most albums were simply collections of songs, he had the idea to select and program the songs thematically to give the albums a cohesive mood. He also made sure his fans got their money's worth, and didn't duplicate material from singles. For the most part, though, albums remained an "adult" format, with more youth-oriented acts focusing on singles. While albums did exist, they were often mishmashes of miscellaneous material. A band didn't go into the studio to record an album; they went into the studio to record, and the record label would decide how to release the resulting material. The best went to single A-sides. Albums were padded with everything else—B-sides, material that didn't quite work, and, of course, covers recorded for the sole purpose of filling out the album. These would often be of whatever hits were popular at the time, and maybe a rock version of an old classic.

The Beatles and other British acts took up the Sinatra mantle of recording cohesive albums, but other musicians, particularly in the US, didn't have that luxury. The strategy of American labels in the 1960s was to shamelessly flood the market with product to milk whatever fleeting success a band had to the fullest extent possible. My favorite example of this trend, and how it interacted with the changing times, is the Beach Boys. They put out their first studio album in 1962, three in 1963, and four in 1964. These were mostly short and laden with filler, but by the time of All Summer Long Beatlemania had hit and they were upping their game. As rock music became more sophisticated in 1965, Brian Wilson began taking a greater interest in making good albums, but Capitol still required three albums from them that year. By this time they were deep into Pet Sounds and didn't have anything ready for the required Christmas release. So they got a few friends into the studio to stage a mock party and recorded an entire album of lazy covers, mostly just acoustic guitar, vocals, and some simple percussion. It's terrible; even the hit single (Barbara Ann) is probably the band's worst. A few years later ripoffs like this were going out of fashion, but, with the band in disarray and no new album forthcoming, Capitol wiped the vocals from some old hits and released the result as Stack-o-Tracks, complete with a lyric sheet, marketing it as a sing-along record. They were truly shameless.

During this period, there was still a large contingent of professional songwriters who wrote specifically for pop artists. The Brill Building held Carole King/Gerry Goffin, Barry Mann/Cynthia Weil, Neil Diamond, and Bert Berns, among others. Motown had its own stable of songwriters to pen hits for its talent (and when they had to put together albums they covered other Motown artists' hits, Beatles songs, Broadway tunes, and whatever else was popular at the time). But times were changing. By the time Sgt. Pepper came out in 1967, rock bands were thinking of themselves as serious groups who played their own instruments, wrote their own material, and recorded albums as independent artistic statements. What criticism of rock existed was limited to industry publications like Billboard and teen magazines like Tiger Beat; the former focused on marketability and the latter on fawning adoration. Rolling Stone was launched in 1967 as an analog to what Down Beat was for jazz—a serious publication for serious criticism of music that deserved it. Major labels clung to the old paradigm for a while, but would soon yield to changing consumer taste. Even R&B, which largely remained aloof from this trend, saw people like Marvin Gaye, Stevie Wonder, and Isaac Hayes emerge as album artists in the 1970s.

This was the status quo that continued until the early 2000s. Pop music meant rock music, rock music meant albums, and albums meant cohesive, individual statements. Covers still existed throughout this period, but the underlying ethos had changed. If a serious rock band records a cover, there's a reason behind it. The decision to record the cover is an artistic one in and of itself, unlike in the 1940s, when you recorded covers because you had no other choice, or in the 1960s, when you recorded covers because the record company needed you to fill out an album. At this time, the industry itself was in a golden age, as far as making money was concerned. The introduction of the CD in the 1980s eliminated a lot of the fidelity problems inherent to analog formats. When they were first introduced, CDs were significantly more expensive to manufacture than records. But by the 1990s, the price of the disc and packaging had shrunk to pennies per unit. The cost of the disc itself was no longer a substantial part of the equation. And the increased fidelity led to increased catalog sales, as people, Baby Boomers especially, began repurchasing their old albums. These were heady times indeed.

And then Napster came along and ended the party. The industry spent the next decade flailing: going before Congress, suing software developers, suing its own potential customers, and implementing bad DRM schemes, all in a vain attempt to stop the tidal wave. Music was no longer something you bought, but something you expected to get for free. After a decade of this nonsense, the industry finally did what it should have done all along and began offering access to a broad library for a reasonable monthly price. For once, it looked like there would be some degree of stability; profits went up, and piracy went down. Which brings us back to those royalties.

Earlier I gave an example where a songwriter gets paid a statutory fee for inclusion of a song on physical media. This doesn't work for streaming; if I buy a CD I pay the 12 cents but I have unlimited access to the song. Streaming works differently, because I technically have access to millions of songs but the artist only gets paid for the ones I play. Paying them 12 cents a song doesn't make sense. So instead, streaming relies on a complicated formula involving percentage of streams compared with total revenue, blah blah blah. The thing about royalties is, they come off the top. If a label releases an album with 12 songs by 12 different songwriters, none of whom have any relation to the label, that's $1.44 right there. But if the songwriter is also the performer under contract to the label, then the label can negotiate a lower songwriting royalty (these are called controlled compositions). But it gets better. Songwriting royalties don't go entirely to the songwriter; they're split between the songwriter and the publishing company. An artist signed to a label is probably required to use a publishing company owned by the label, so there's a 50% discount right there. A typical record contract grants a 25% discount on controlled compositions, so that $1.44 the label owes in royalties is down to 54 cents if the artist writes all his own songs.
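To make that controlled-composition arithmetic concrete, here's a minimal sketch using the figures from the text (the rounded 12-cent rate, the publisher's 50% share captured by a label-owned publisher, and the typical 25% contract discount):

    # Sketch of the controlled-composition discount described above.
    RATE = 0.12      # per-track rate, rounded to 12 cents as in the text
    TRACKS = 12

    outside = RATE * TRACKS          # 12 outside songwriters: $1.44 per album
    # The label-owned publisher keeps the publisher's 50% share, and the
    # contract takes a further 25% off the controlled-composition rate:
    controlled = outside * 0.50 * 0.75
    recouped = 1 - controlled / outside

    print(f"${outside:.2f} -> ${controlled:.2f} ({recouped:.1%} recouped)")
    # $1.44 -> $0.54 (62.5% recouped), matching the figures in the text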

In the world of streaming, where the royalties are paid every time a song gets played, this can add up quickly. There's no real downside to releasing a few covers for streaming as album tracks or as part of a miscellaneous release, because the streaming totals aren't going to be that high. The problem comes when the recording becomes a hit; when there's a lot of money at stake, being able to recoup 62.5% of the songwriting royalties you'd otherwise have to pay means a lot of money. To be fair, most labels use outside songwriters to pen hits for their pop artists. But these songwriters are almost always affiliated with the publishers owned by the labels, and are thus cheaper on the whole than going out into the universe of available songs and picking one you like. The Great American Songbook existed in an entirely different world, where paying these royalties was an accepted fact of the industry. After the Rock Revolution, this was no longer the case, but the culture still existed, and the industry was making so much money that it didn't care. After 2000, extreme cost cutting became the norm, and songwriting royalties were an easy target in a world that had largely moved away from outside songwriters.

BTW I thought that "Fast Car" cover sucked. First, the song wasn't that good to begin with, and Tracy Chapman is possibly the most overrated singer-songwriter in history (aside from her two hits her material is the definition of generic). Second, a cover should try to reinvent the song in the performer's image. Here, it sounded like Luke was singing karaoke.

So what's the problem here? These people worked for the company for years at a certain salary with certain skills. You decide they need additional skills that the market is telling you loud and clear merit higher pay, but you evidently don't want to pay them. It's as simple as that. Say I'm a property manager and I oversee staff categorized as "general maintenance" who do things like cut grass, shovel snow, change light bulbs, and do small repairs. I decide they're useless because they can't do any serious electrical work so I train them all as electricians. How shocked can I be when they leave to take higher paying jobs with electrical contractors? When I took over these people were all maintenance men and were being paid like maintenance men. It would be ridiculous of me to expect to have a staff full of certified electricians making maintenance man salaries.

Edit: Reading these comments in tandem with the discussion from earlier in the week has really helped me crystallize my thoughts on the matter, which I didn't really pay attention to until recently. I had asked for some clarification of "abuse" and @SubstantialFrivolity provided me with an example, but now I'm seeing how this works in practice. It helps frame the divide more clearly: On the one hand you have Trump, Musk, MTG, Vivek, and others, the common thread among them being that they have owned businesses and know firsthand how hard it can be to get employees. On the other hand, you have Laura Loomer, Steve Bannon, the MAGA base, and the Fox News comment section, most of whom don't own businesses and aren't in senior management, but who know how hard it can be to find a job.

The sentiments I see expressed among the MAGA faithful are that, if there is specialized work to be done, we should be providing education and training to Americans so that they can fill these desirable jobs rather than simply importing labor from elsewhere. The sentiments I'm seeing on the left largely echo that, but with the added wrinkle that the problem isn't so much a shortage as a shortage at the rates employers want to pay. As in OP's comment above: if an IT head doesn't like that his staff are moving on as soon as they have the training he wants for the job, then rather than pay a market rate, he cries shortage and imports someone at a lower wage who is tied to the employer by virtue of his immigration status. He gets around the whole "prevailing wage" issue by taking advantage of the lack of granularity: The relevant job category is probably something like "IT person", not "IT person with special training on x, y, and z".

I am, however, sensitive to there being actual shortages in certain occupations, where we can't just wait 10 years for enough qualified people to become available. It seems that the solution here is eliminating (or greatly restricting) temporary visas and replacing them with more permanent ones. If we're talking about truly exceptional people, the sponsoring company should be willing to take on the risk that they'll jump ship. They should also be willing to pay a relatively high application fee. In other words, hiring foreign workers should be more expensive than hiring native workers. If bringing over a single foreign worker requires a $25,000 nonrefundable application fee, legal fees, and no guarantee that the sponsored immigrant won't jump ship to a company that offers him more money, you're not bringing him over unless there really is a shortage and you're confident you can keep him.

I worked as a title attorney for a decade. It's not a scam. Most of what you're paying for isn't to theoretically pay off future claims, but to pay for work done up front to prevent future claims. This requires the company to send someone down to the courthouse to gather all of the title documents, which are then sent to an attorney who looks for issues and drafts a list of exceptions that the policy won't cover. If the exceptions are minor things like utility easements that don't really affect the value of the property, the company will write the policy. About a third of the time, though, there are major issues that require the insurance company to do further curative work before they'll move forward. The reason such a small percentage goes toward paying out claims is that the vast, vast majority of your premium is spent on making sure there won't be any claims.

Now, theoretically you could forgo the insurance and research the title on your own, but this will inevitably cost you more than just getting the damn insurance, because you're now paying the full hourly rate for an attorney who may or may not have any significant experience doing title work, whereas the insurance company has an attorney on its payroll for a lot less, and this guy does nothing but titles. They'll also be able to delegate a lot of the legwork to other staff, who also do nothing but titles. So you're paying less for a superior product. Theoretically you could also do the research yourself, but I would highly, highly recommend against even thinking about attempting this. Even having spent ten years doing titles that were much more complex than typical residential real estate transactions, there's no way in hell I wouldn't buy title insurance. I've seen too much.

Other countries (not the US) have central land registries and dispense with title insurance altogether.

The problem there is that we would have to essentially run a full title for all land going back to patent. Most title insurance companies only do a 60-year search, because claims beyond that are rare enough that occasionally having to pay one isn't a big deal. But it becomes important if you're making ironclad assurances. You could theoretically get around this by passing a marketable title act that functions as an effective statute of limitations on claims, but you still don't avoid the basic problem: It would still be really expensive. How long do you think it would take, and how much would it cost, to run full title on all 585,000 parcels in Allegheny County? You're probably talking billions, when you consider that a lot of these are going to be industrial and commercial properties that have much more complex titles than a simple residential subdivision lot. Rural counties have fewer parcels, but rural work poses its own problems; those titles are almost never easy. Then there are the associated costs of curing all those titles (a buyer can always walk away), developing and implementing the system, and dealing with the inevitable lawsuits that follow. I did a lot of work in Ohio right when oil and gas was starting to take off. The state had passed a dormant mineral act that sought to simplify things: Rather than having to track down the innumerable hard-to-find heirs of someone who severed a mineral interest in 1919 and then forgot about it, any interest that hadn't seen any action within the past 20 years would merge with the surface. Seems simple enough on its face. Instead it led to a decade of wrangling, and counting, with the Ohio Supreme Court getting involved on several occasions, to determine when an interest is actually terminated. We basically had to hold off on interpreting it while the cases worked their way through the courts. I doubt the wholesale termination of old interests would go much differently.
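
Just to put rough numbers on "billions": the per-parcel costs below are pure guesses for illustration, but even lowball figures get you there.

```python
# Illustrative only: the per-parcel costs are hypothetical guesses, not quotes.
# The parcel count is the Allegheny County figure cited above.
parcels = 585_000
cost_simple = 2_000        # hypothetical: straightforward residential lot, searched to patent
cost_complex = 10_000      # hypothetical: industrial/commercial title

low, high = parcels * cost_simple, parcels * cost_complex
print(f"${low / 1e9:.2f}B to ${high / 1e9:.2f}B for the searches alone,")   # $1.17B to $5.85B
print("before any curative work, system development, or litigation")
```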

I remember back in 2016 I was sitting on my cousin's deck for one of his kids' first birthday parties, and my uncle posed a question to the group of whether the kid in question would ever get a driver's license. Now, he has a habit of going out on certain limbs when arguing, but he seemed utterly convinced that fifteen years hence autonomous vehicles would be so ubiquitous as to obviate the need for any driver training among normal people. I argued against the idea, but only to the extent that the regulatory landscape wouldn't change that fast—I certainly thought the technology would be there, but I doubted that regulators and insurance companies would have the stomach to turn all operations over to computers. Of course, that was around the time when everyone was talking about AVs. A guy near me trying to win the Democratic nomination for state rep was basing his entire campaign on handling the disruption that would soon wreak havoc on the trucking industry. I saw Uber's AVs on an almost daily basis near my office in Pittsburgh. CGP Grey was making videos about how full autonomy would basically solve traffic congestion, at least as long as you don't give a fuck about pedestrians.

This summer, that kid will be halfway toward qualifying for a learner's permit, and autonomous vehicles seem further away now than they did when he was one. Less than two years after that party, a woman in Arizona was killed after being hit by an Uber self-driving car. From the evidence available, it didn't look to me like the accident was avoidable, and had it involved a standard car it would have made the local news for a couple days but probably wouldn't have even resulted in charges being filed. But since it was an AV, the story went national, and the public's trust eroded. It would be easy to blame this incident for the collapse of enthusiasm over AVs, but let's face it: something like this happening was only a matter of time, and the public response was entirely predictable. So the industry plugged along, and keeps plugging along, though fewer and fewer people seem to care. Uber's out, Ford's out, Volkswagen's out, GM is under investigation, Apple seems directionless and indifferent, and a recent Washington Post article claims that Tesla cut quite a few corners in its pursuit of offering its customers something that could be marketed as progress.

Hype for AVs started picking up in earnest among the tech horny around 2012. Three years later the buzz was mainstream. All throughout this period various industry leaders kept making bold predictions about truly autonomous products being only a few years away. Okay, maybe with some caveats, like only on the highway, or in geofenced areas, or whatever, but still, you'd at least be able to get something that had some degree of real autonomy. The enthusiasm seemed justified, though, since, practically overnight, self-driving cars went from something that you'd occasionally hear about in science magazines when some university was doing basic research to something that major tech and auto companies were sinking billions of dollars into. Around the same time, regular cars started getting features like adaptive cruise control and lane keep assist that seemed like self-driving under another name, and Tesla's autopilot feature seemed like a huge leap. With the normal acceleration of technology plus the loads of money that were being dumped into any number of competing companies, it was only a matter of time. Now, ten years and 100 billion dollars later, the only products that are available to an average consumer are a few unreliable ridesharing services in cities that don't have weather.

I'm bringing this up because there are a lot of parallels between AVs and GPT-4. This is a huge, disruptive technology that relies on AI, and, while it may have some critical flaws in its current implementation, technology is constantly improving, often exponentially, as processing power increases. And while I don't have access to GPT-4 myself, I'm sure it's as impressive as everyone claims it is. The trouble is, impressing people with no skin in the game is easy. Convincing people to rely on it is a whole different animal. Most people found AVs pretty impressive when they first came out. But being impressive doesn't cut it when you're looking to replace human drivers; you actually have to be better than human drivers, or at least as good as them. And human drivers are pretty damn good. In 2021 there were around 5.2 million reportable accidents in nearly 3 trillion miles driven (in PA an accident is reportable if one of the cars is inoperable or there is injury or death, though other states may vary). This means that, in any given mile of driving, one's chances of getting into an accident more serious than a fender bender are .000181%. If you drive 15,000 miles a year, you'll get into an accident roughly once every 35 to 40 years. If Elon Musk or whoever announced that they had developed a system that avoided accidents 99.9% of the time (per mile driven), that would sound impressive. But it wouldn't be; at that rate, the average driver would be getting into about 15 crashes per year! Even at 99.99% you'd still be getting into more than a crash a year: 3 every 2 years. Imagine what your insurance rates would be like if you got into a crash a year.
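
For anyone who wants to check my math, here it is spelled out; the one assumption is that "99.9%" is read as a per-mile avoidance rate, which is what the crash counts above rely on.

```python
# Reproducing the crash arithmetic above. The mileage figure is backed out
# from the .000181% per-mile rate ("nearly 3 trillion" miles).
accidents = 5.2e6          # PA-style reportable accidents, 2021, per the figures above
miles = 2.87e12            # implied by 5.2M accidents at .000181% per mile
per_mile = accidents / miles

print(f"Per-mile accident odds: {per_mile:.6%}")                                  # ~0.000181%
print(f"Years between accidents at 15,000 mi/yr: {1 / (per_mile * 15_000):.0f}")  # ~37

for avoid in (0.999, 0.9999):
    crashes_per_year = (1 - avoid) * 15_000
    print(f"{avoid:.2%} avoidance -> {crashes_per_year:.1f} crashes/yr")
# -> 15.0 crashes/yr at 99.90%, 1.5 at 99.99% (i.e., 3 every 2 years)
```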

And that doesn't even take into account all the miscellaneous bullshit that AVs do that doesn't cause accidents but nonetheless makes them untenable. They have trouble with unprotected left turns (aka most left turns), and they'll take circuitous routes to avoid them. They don't like construction, even minor construction like a lane being blocked off with cones. They get confused when, say, a landscaper has mulch bags hanging into the street a little bit, or when driving down a narrow street with cars parked on both sides. And when this happens they just stop and call home. The people who use these ride sharing services are then forced to wait while a tech shows up to deal with the problem, disrupting traffic in the meantime. And I won't even mention inclement weather. Making something look impressive during early testing is easy, but convincing someone to rely on it when safety, or money, or anything else that actually matters is at stake is a much harder sell, as the accuracy has to be pretty damn close to 100% before anyone will actually trust it. And if AVs are any indication, it's really hard to get to 100%. Which is why I wouldn't be surprised if AI right now is at about the same stage AVs were in 2016. Impressive, but far from ready for prime time. Everyone keeps saying that the next iteration is going to be a game changer, and everyone is increasingly impressed, but not impressed enough to trust their business to it. And eventually it gets to the point where research is so expensive and the returns so meager that no one in their right mind would invest in it, and smaller firms go bust while larger ones scale back considerably, or at least try to direct their AI research toward applications where it might actually be used commercially. Then we're all sitting here in 2030 asking ourselves what happened to the AI revolution that seemed right around the corner. I could be wrong, but if that's the case, then hey, we should at least have some operable self-driving cars.

The Anatomy of an NFL Holdout

When a star NFL player enters the final year of his contract, it's customary for his team to negotiate a deal that will keep the player with the club long-term and usually compensate him handsomely for it. Occasionally, however, the player and the team can't come to terms. Sometimes the player will bide his time until he can become a free agent and see if the market is willing to give him the deal he thinks he deserves. But other times the player feels an emotional connection to his team and wants to stay, but on his terms, not the team's. And sometimes the team doesn't even attempt to negotiate a new contract when the player wants one. In times past, the latter two scenarios would occasionally lead to a training camp holdout, when a player would stay home from team practices in an attempt to gain leverage in negotiations. Theoretically, any player unhappy with his current contract could hold out, but it was more common when players were entering the final year of a deal and wanted to renegotiate early, since that would usually result in a higher salary for the current year. The 2020 Collective Bargaining Agreement (the master deal between the league and the players union) put an end to this practice, imposing ruinous fines on players for not performing in accordance with their contracts. But there are exceptions.

Lamar Jackson was drafted by the Baltimore Ravens near the end of the first round of the 2018 NFL draft. He took over for injured starter Joe Flacco in November of his rookie season, became the youngest QB to start a playoff game, and quickly made a name for himself as one of the league's most exciting young players. In 2019 he was unanimously selected as league MVP, and in 2020 he had another great season and recorded his first playoff win. Jackson's value comes from a unique combination of arm strength and mobility. There had been mobile quarterbacks before, but for most of them their mobility was their only real strength; if they were forced to beat you with their arm, they couldn't do it. So guys like Michael Vick, Colin Kaepernick, and RGIII would give defenses fits—there's one less linebacker to blitz or drop into coverage if you have to assign one to spy the QB on every play—but defenses soon found that this strength could be mitigated by neutralizing the ground game and forcing them to throw. Jackson, for all his running talent, was a traditional pocket passer in college, and was as comfortable throwing the ball as he was running it. He could roll out on what appeared to be a designed run play, bait the defense in, then stop and throw a perfect over-the-shoulder fade. It was incredible.

Jackson entered 2022 on the final year of his rookie contract, and it was widely expected that the Ravens would sign him to a long-term deal. That didn't happen, but both sides seemed optimistic, and the year passed mostly uneventfully. But, as the season wound down, it became clear that trouble was afoot. The NFL is unique among sports leagues in that most contracts aren't fully guaranteed; the team can, subject to certain constraints in the CBA, cut a player without paying them. Often teams will guarantee part of the contract, and these terms get too complicated to describe here, but the amount of guaranteed money is usually the biggest sticking point in negotiations involving superstars. It was assumed that, though the Ravens were probably willing to give Lamar Jackson a ton of money, they probably weren't willing to guarantee all of it. The NFL salary cap prevents teams from just eating bad deals; it's hard to bring in good players to improve your team when one player is responsible for a huge cap hit, and doubly hard if that player is no longer a key contributor.

This unwillingness to guarantee became more salient as 2022 unfolded. The 2022 offseason was marked by two big deals. The first was the Cleveland Browns giving the Houston Texans a king's ransom for QB Deshaun Watson, and then immediately re-signing Watson to a fully guaranteed $230 million deal. Watson had been the subject of lawsuits and a criminal investigation for sexual misconduct toward massage therapists. He was very good, but the Texans wanted nothing to do with him, and benched him for the entirety of the 2021 season. The legal issues went away, but the NFL still suspended him for the first 11 games of the 2022 season. So Watson joined the Browns on a monster contract but hadn't played a game in over a season and a half, and when he finally took the field in December, it showed. The jury's still out on whether Watson can shake off the rust, but the move seemed questionable when it was made and seems incredibly foolish in hindsight. The other big move was the Denver Broncos sending a similar haul to Seattle for veteran QB Russell Wilson, and similarly renegotiating his contract to one with a lot of guaranteed money. Wilson won a Super Bowl in Seattle and was one of the better players in the league for a long time, but the Seahawks struggled in 2021, and with the team entering the rebuild stage, and Wilson being their obvious best player, it made sense to move him. When he got to Denver, however, it became clear that Wilson was a large part of the reason why Seattle had been underperforming. Wilson was awful in Denver, a team that was supposedly a quarterback away from greatness, while Seattle made the playoffs with Geno Smith, a journeyman who was a bust with the Jets, under center. If the jury is still out on the Watson deal, most pundits agree that the Wilson deal screwed over Denver for a long time.

The Browns deal surprised no one, since the Browns are notorious for being the most incompetent team in the league. But a lot of people thought that the Broncos paid a fair price for a player of Wilson's caliber. Either way, the Ravens are known as one of the more competent teams in the league when it comes to personnel decisions, so it's certainly in character that they wouldn't want to commit to guaranteed cash, even for their undisputed best player. After these two fiascos, though, it seems insane that any team would be willing to commit so much money, least of all the Ravens. Furthermore, Jackson has not proven himself immune to the other big weakness of mobile QBs: Injuries. Mobile quarterbacks take more hits than pocket passers, and as such tend to get injured more. Jackson sat out the last several weeks of both the 2021 and 2022 seasons with injuries. So it's clear that Jackson's bargaining power is greatly diminished compared to last year.

Technically, Jackson is a free agent, but the Ravens weren't willing to let him walk just yet. In an effort to keep teams from losing key players in free agency, the CBA provides for a franchise tag. Each team can select one player to tag, and the league essentially writes a one-year contract for him. The franchise tag comes in two flavors. More common is the exclusive franchise tag, which simply means the team keeps the player for one additional year at the average salary of the top 5 players at the position. Less common is the non-exclusive tag. This comes with a lower salary number, but allows the player to negotiate with other teams. If the player reaches a deal with another team, the original team has the option of matching the offer. If the original team decides not to match, it gets two first-round draft picks from the new team. Earlier this month, the Ravens announced their intention of giving Jackson the non-exclusive tag. This is normally a risky move, since the compensation provided is less than what the team could get from simply trading the player. But it's genius in this case: Baltimore knows that after the Watson and Wilson fiascos there won't be too many teams looking to sign a guy who wants a fully guaranteed contract. And indeed, so far no other team has shown any interest. Things have become more contentious in light of recent reports that, prior to last season, Jackson turned down a five-year, $250 million deal with $133 million guaranteed. The message from the Ravens is clear: Your demands are unreasonable. We offered you a fair deal, and you won't get that deal anywhere else. If you don't believe us, we'll let you test the market and see for yourself, and if you don't like it, you can play here for another year for $30 million.

Pittsburgh: An Urban Portrait

For a while I've wanted to do a comprehensive survey of a city to examine it in terms of urbanism and the principles of what makes a place a good place to live. In particular, I want to examine what makes certain places "trendy", and what causes some neighborhoods to gentrify while others stagnate or even decline. Most examinations of the urban environment are merely case studies of a few neighborhoods that have seen change in the past several decades, for better or worse. But I think that those kinds of studies, while instructive, miss the big picture. Most cities are composed of dozens of neighborhoods, each with its own story and its own potential, and most are simply forgotten about. I've selected Pittsburgh for this exercise, for the simple reason that I live here and can talk about it as an insider rather than someone relying on news reports. You can talk statistics until the end of time, but the only way to properly evaluate a place is to have a pulse on how it's perceived by those who are familiar with it. Before I get to the neighborhoods themselves, though, I want to give some preliminary information about the city so those who are unfamiliar (i.e. almost everyone here) can get the view from 10,000 feet. It also gives me the opportunity to present a few general themes that I've noticed during the months I spent researching this project. Note to mods: A lot of this survey will touch on a number of culture war items like crime, homelessness, housing, density, traffic patterns, etc. For that reason, I'm posting this in the culture war thread for now. That being said, there will be large sections where I look at nondescript parts of the city where I expect the discussion to be more anodyne, and I don't want to hog the bandwidth of this thread, especially in the unlikely event that I can crank out more than one of these per week. I can't really anticipate in advance what's most appropriate where, but I'd prefer to post these as stand-alone threads once I get past this initial post. If the mods have a preference for where I post these, I'll adhere to that.

I. The Setting

Pittsburgh exists in a kind of no-man's land. It's technically in the Northeast, but people from New York, Philadelphia, and the like insist that it's actually more Midwestern. They may have a point; we're six hours from the nearest ocean, and the Appalachian Mountains are a significant barrier to transportation and development. No megalopolis will ever develop between Pittsburgh and Philly, and we're much closer to places like Cleveland and Columbus. We're also not assholes. That being said, nobody here thinks of themself as Midwestern. First, it's possibly the least flat major city in the US. Second, most Midwestern cities act as quasi-satellites of Chicago in the way that Pittsburgh simply doesn't. Additionally, being in the same state as Philadelphia makes us much closer politically and economically to that area than we are to places that may be closer geographically. Some people try to split the difference and say that Pittsburgh is an Appalachian city, but this isn't entirely correct, either; Pittsburgh is at the northern end of what can plausibly be called Appalachia, and is a world away from the culture of places like East Tennessee. There are close ties to West Virginia, but these are more due to proximity than anything else; for most of that state, Pittsburgh is the closest major city of any significance, which is reflected in things like sports team affiliation. And the Northern Panhandle (and associated part of Ohio) is practically an exurb of Pittsburgh, with a similar development pattern around heavy industry. But for the most part, West Virginia swings toward us rather than us swinging toward them.

The physical landscape can best be described as extremely hilly. For reference, I describe a "hill" as any eminence that rises less than about 700–1000 feet above the surrounding valley, with anything in that range or higher being a mountain. The area is built on a plateau that has been heavily dissected by erosion. Relief is low to moderate, ranging from about 200 feet in upland areas to 400 feet in the river valleys. The natural history results in an area where the hilltops are all roughly the same height, about 1200–1300 feet above sea level, while the valleys range from a low of 715 feet at the Point to about 900–1000 feet at the headwaters of the streams. And there are streams everywhere. The most prominent ones are the three rivers (the Allegheny, Monongahela, and Ohio), but there are innumerable creeks that spiderweb across the landscape. The upshot is that flat land is rare around here, and traditional patterns of urban development are difficult to impossible. Most people live on hillsides since the little flat land available is often in floodplains. Roads are winding and difficult to navigate; you may miss a turn and think that if you make the next turn you'll eventually wind up where you want to be. Instead, you find yourself winding down a long hill and end up in one of three places: In view of Downtown from an angle you've never seen before, at the junction with a state highway whose number you've never heard of, or in West Virginia.

What this means for the urban environment is that neighborhoods are more distinct than they are in other cities. While flat cities have neighborhoods that blend into one another seamlessly, Pittsburgh's are often clearly delineated, with obvious boundaries. The city is defined by its topography. One advantage of this is that a lot of the land is simply too steep to be buildable, even taking into consideration that half the houses are already built on land that one would presume is too steep to be buildable. The result is a lot of green space. Another advantage is that it means you get views like this from ground level. The actual green space itself is typical of a temperate deciduous forest, but with a couple of caveats — there's plenty of red maple, sugar maple, red oak, white oak, black cherry, black walnut, and other similar species, but not as much beech as you'd see in areas further north, and not as much hickory as you'd see in areas further south. There are conifers but most of them are planted landscaping trees. White pine and eastern hemlock are native to the area, but they're much more common in the mountains to the east. I should also note that the topography means that there are some weird corners of the city that have an almost backwoods hillbilly feel.

II. The Region

I'd describe the larger region as a series of concentric rings. First is the city proper, which is small for a city of its size. While that seems like a contradiction, what I mean is that the actual city limits are, well, limited, giving the city itself a proportionately low population compared to the total metro area. This is because PA state law changes in the early 20th century made it difficult for the city to annex additional territory. The result is that the boundaries were fixed relatively early in the era when America was urbanizing rapidly, and only sporadic additions were made thereafter. The next ring would be what I call the urban core. This is the area where the density and age of the housing gives what are technically suburbs a more urban feel than traditional suburbs; in many cases, these suburbs feel more urban than the later-developed parts of the city proper. These would include typical inner-ring streetcar suburbs, though Pittsburgh has fewer of these than most cities of its size. Most of the areas thus described are towns that developed as the result of industrial concerns, or suburbs of such towns. These are most prominent to the city's immediate east, and also include the innumerable river towns in the river valleys. These towns extend along the rivers for a considerable distance, but there's an area close to the city where they form an unbroken geographic mass. If not for limitations on expansion, they would likely be part of the city itself.

Next, we obviously have the true suburbs, by which I mean areas that developed after World War II but still revolve around Pittsburgh more than around a regional satellite. Then we have the exurbs, which I define as areas that are developed, but more sporadically, and which often revolve around a satellite county seat rather than Pittsburgh itself. This is the area where couples looking for an extended date will get a hotel in the city for the weekend. (My family makes fun of my brother for doing this because he lives in one of these areas but always insists that he's close. Never mind that it would be ridiculous for any of us to get a hotel room in Pittsburgh if we weren't planning on getting seriously wasted.) Finally, we have the much broader greater co-prosperity sphere, which is roughly everywhere that falls within Pittsburgh's general influence, from rooting for its sports teams to being the destination when you need to go to a hospital that isn't crappy.

III. History

I'll try to make this as quick as possible, since there are obviously better, more comprehensive sources for people who want more than a cursory review. The city was ostensibly founded when the British drove out the French during the French and Indian War and established Fort Pitt. The war began largely as a contest between the British and the French for control over the Ohio Valley, a vital link between the interior Northeast and the Mississippi River. The site of Pittsburgh was particularly strategic, as it was at the confluence of two navigable rivers. The surrounding hills were rich in coal; combine this with the favorable river network, and the location was perfect for the nation's burgeoning iron and steel industry. This prosperity attracted waves of immigrants from Italy and Eastern Europe, who later came to define the region. A number of satellite industries developed as well, including glass (PPG), aluminum (Alcoa), chemicals (Koppers, PPG), electrical products (Westinghouse), natural gas (EQT), etc. Pittsburgh's place as an industrial powerhouse continued until the triple whammy of the energy crisis, inflation, and the Reagan recession sparked a wave of deindustrialization that turned America's Rhineland into the Rust Belt. By the '90s the region was bleeding jobs, and much of the working-age population decamped for the Sun Belt. The hemorrhaging has stabilized in recent years, but the region is still slowly losing population.

The odd thing about this, though, is that in 1985, at what should have been the city's nadir, it started ranking high on the "livability" lists that were becoming popular. The city had been making a concerted effort to reduce pollution since the '50s, and by the early 2000s it had become a bit of a trendy place to live. I don't want to speculate too much on why this is, but I think there are a few factors at play. First, crime is low for a Rust Belt city; there aren't too many really bad areas, and the ones that exist are small and isolated. What this means is that there is a certain freedom of movement that you don't have in other Rust Belt cities like Cleveland or Detroit with large swathes of ghetto. Even in the worst areas, the only time you might find yourself in trouble is if you visit one both at night and on foot. Even the worst areas are fine to walk around in the daytime, and I wouldn't worry about driving through anywhere, which is more than I can say for friends' experiences in Cleveland or Chicago. Second, the housing stock is more East Coast than Midwest. Many of the neighborhoods have architectural character, as opposed to other Rust Belt cities that are nothing but rows of nearly identical derelict frame houses (though we have plenty of those, too). Third, the housing is actually affordable. People have been bitching in recent years about significant price increases, but it's still nowhere near the level of the major East Coast cities or the trendy Western cities. Years ago I met a girl who moved here from New York because she wanted to live in a brick row house but it was simply unattainable where she was. She looked at Baltimore and Philadelphia, which are true row house cities, but the ones she could afford were all in the endless expanses of ghetto. In Pittsburgh, meanwhile, you could snatch up a renovated one in a good area for well under $200k, and rehabs were being sold for under $50k. You aren't getting them for anywhere near that now, but $500k gets you a nice house in the city, and if you want to do the suburbs you pretty much have your pick of 4BR 2000-square-foot homes in excellent school districts. Finally, the outdoor recreation is better than you're going to get in a city of comparable size or larger anywhere east of Denver, and the hotspots don't get the crowds that the Western areas do. In the Northeast you have to drive a lot farther to get anywhere, and the places are busier. In the Midwest the cities are surrounded by corn, and the areas worth visiting are few and far between. In Pittsburgh, the mountains are only about an hour away, and the general area is hilly enough and forested enough that a typical county park has better hiking than anything within driving distance of Chicago. The mountain biking and whitewater are nonpareil, and that's still a secret to most locals.

I've gone a bit off track here, but I want to make one general observation that I've noticed when studying the history of the city: Everything changes all the time, and there are no meta-narratives. The first statement may seem obvious, but when discussing urban dynamics, people often act like there was some golden era where everything was in stasis; if we're still in that era then any change is bad and disruptive, and if we're not in that era then any change should aim to get back to it. The meta-narrative is simpler: American cities developed in the 19th century, and grew rapidly during industrialization in the late 19th and early 20th centuries. This was largely due to high immigration. Few had cars, so people needed to live close to where they worked, and public transit networks were robust. Blacks lived in segregated neighborhoods, and this was a problem. After the war, people started moving to the suburbs, a process which was hastened by not wanting to live alongside black people, who were gradually getting better access to housing. This white flight drained cities of their economic base, and the new suburban commuters demanded better car access to the city core. Once-decent neighborhoods were turning into black ghettos. The response of municipal leaders was to engage in a number of ill-advised "urban renewal" projects that were blatant attempts to lure white people back into the city by resegregating the blacks into housing projects so they could build white elephant projects and superhighways. Then in the '90s hipsters were invented and they looked longingly at the urban lifestyle. Hip artist types moved into ghettos because they liked the old architecture, could afford the rent, and were too racially enlightened to be concerned about black crime. Some of them opened small businesses and white people started visiting these businesses, and the neighborhood became a cool place to live. By 2010, though, that neighborhood was expensive and all the cool spots were replaced by tony bars and chain stores, and all the bohemes had to find another neighborhood. Meanwhile, the poor blacks who lived there before the hipsters showed up started complaining about being displaced from their homes, and now the same hipsters who "gentrified" the neighborhood are concerned about the effects of reinvestment on long-time residents.

This narrative probably fits somewhere, but the reality is often more complicated. One common refrain I heard from older people in the '90s was "neighborhood x used to be a nice place to live and now it's a terrible slum". Usually, the old person in question was the child of an immigrant who grew up in the neighborhood but decamped to the suburbs in the '50s. She'd return regularly to visit her parents, and watch what she saw as the decline of the neighborhood firsthand. The problem with this is that most of the neighborhoods I heard people talk about like this growing up were always slums. The only thing that changed about them was that they got blacker and lost the business districts they once had. Second, in Pittsburgh at least, the demographic change in some neighborhoods was more relative than absolute. While some places did see an increase in the black population in the second half of the 20th century, most places did not. Before World War II, Pittsburgh only had one truly black neighborhood, and even that was more diverse than one would expect. Blacks normally lived in racially mixed neighborhoods alongside Italians, Poles, Jews, etc. of similar economic standing. The changing demographics were often caused more by whites leaving than by blacks moving in. It's also worth noting that some areas went downhill long before any of the factors cited in the meta-narrative really kicked in. People tend to be ignorant of urban dynamics in the first half of the 20th century, which is viewed as this juggernaut of urban growth. No one considers that a neighborhood might have peaked in 1910 and gone into decline thereafter, because the meta-narrative doesn't allow for it. But in Pittsburgh, I see these sorts of things time and time again.

IV. The Housing Stock

I mentioned housing stock in the last section, but I want to go into a bit more detail here because it's important when evaluating a neighborhood's potential for future growth. When Pittsburgh was first settled, most of the housing was simple frame stock. Most of this is gone, but, contrary to what one might think, the little that's left isn't particularly desirable. These houses tend to be small and in bad condition, essentially old farmhouses from when most of the current city was rural. Later in the 19th century, brick row houses were built in the neighborhoods that occupied the relatively flat lowlands. Almost every row house neighborhood in the city is desirable, as these neighborhoods have a dense, urban feel. It should be noted, though, that through most of the 20th century this was housing for poor people, as the middle class and above considered it outdated.

Also from around this time is the Pittsburgh mill house. These are similar to what you'd find in most Rust Belt cities, and are proof that not all old housing has "character". These were houses built on the cheap and have often been extensively remuddled to keep them habitable. Most of these in the city aren’t exactly true mill houses, as they weren’t built by steel companies as employee housing, but most 19th and early 20th century frame houses fit the same mold. These were mostly built on hillsides and hilltops where building row houses was impractical. Not a particularly desirable style.

Combining the two is the frame row. These were built during a period in the early 20th century when the area was experiencing a brick shortage. They aren't as desirable as brick rows but still have more cachet than mill houses, although the purpose for which they were built is similar. Most of these were remuddled at some point (by this I mean things like plaster walls torn out in favor of wood paneling and drop ceilings, window frames modified to fit different sizes, wood siding replaced with aluminum siding or Inselbric, awnings, etc.). By the 1920s and 1930s, the classic streetcar suburban style took over. These include things like foursquares and bungalows, the kind of stuff you see in old Sears catalogs. The brick shortage had ended by this period and the houses were larger and better-appointed, making them popular for middle-class areas. The remuddling on these was limited, and they're highly desirable. After the war, more suburban styles took over, though by this point the city limits were mostly built-out so they aren't as common as other styles. Most of the suburban stuff was built during the first decade after the war in odd parts of the city that were too isolated to have been developed earlier, though a fair amount was built in neighborhoods that were rapidly declining into ghetto in an attempt at stabilization. There's nothing wrong with these houses in and of themselves, but they aren't particularly desirable, as this is exactly the kind of development urbanists hate most.

There are obviously other styles, but the rest of the housing is either multi-family or infill housing that may or may not have been built with consideration given to the vibe of the existing neighborhood. The city has gotten better in recent years about building new houses to match what’s already there, but there are plenty of hideous miscues out there.

V. Neighborhood Dynamics

Pittsburgh is roughly divided into four geographic quadrants, based on the points of the compass. The East End roughly includes anything between the Monongahela and Allegheny rivers, and is where most of the trendy neighborhoods are. The North Side is anything north of the Allegheny; the neighborhoods in the flat plain along the river are mostly desirable, if less obnoxiously trendy. The South Hills are roughly everything south of the Monongahela; most of it isn’t trendy at all. The West End is everything south of the Ohio, and is beyond not trendy; it’s basically terra incognita to most Pittsburghers, as the neighborhoods are boring and obscure.

Pittsburgh officially recognizes 90 distinct neighborhoods, but the official geography isn't entirely accurate. First, the official boundaries are based on census tracts that don't always line up neatly with a neighborhood's generally accepted boundaries. Second, there are a number of bogus or semi-bogus neighborhood designations. Large neighborhoods are often split up into smaller geographic divisions (e.g. North Haverbrook, South Haverbrook, etc.) that may or may not line up with the way people actually talk. Conversely, some neighborhoods include areas that everyone treats as distinct neighborhoods but are officially unrecognized. Some neighborhoods had their names changed because the residents didn't want to be associated with a declining part of the neighborhood; in some cases these new names caught on, but other times they didn't. For this project, I will be discussing the neighborhoods based on what makes sense to me, having lived here all my life and knowing how people actually treat the matter. When necessary, I will use historic designations that don't necessarily match up with the official maps, but this is rare. I will always make reference to the official designations to avoid confusion for those following along at home.

As I was examining the neighborhoods in detail in preparation for this project, a few things jumped out at me with regard to gentrification, stability, and decline. First, a gentrifying neighborhood needs a relatively intact business district. This could be nothing more than boarded-up storefronts, but the physical structures need to be there; there has to be some indication that the place has potential, and it's much easier for businesses to move in when they don't have to build. Some depressed areas lost practically their entire business districts to blight, while others never really had a business district to begin with. This second scenario decreases the chances of gentrification even further, as there is often no logical place to even put a business district.

The presence of a business district is important for two reasons. First, walkability is a huge selling point for people who want to live in a city as opposed to suburbs, and an area that's dense but unwalkable is the worst of both worlds. Second, neglected neighborhoods don't get "on the map", so to speak, unless there's something to draw in outsiders.

Related to the above, there are two general kinds of businesses that can occupy a business district. The first are what I call Functional Businesses — grocery stores, dry cleaners, corner bars, banks, professional offices, hardware stores, etc. The second are Destination Businesses — restaurants, breweries, boutiques, trendy bars, specialty stores, performance venues, and other miscellaneous stuff that will actually draw people in from outside the neighborhood. There's obviously a continuum here, as, for example, a coffee shop could be either depending on how much it distinguishes itself, but you get the idea. Both are essential for a neighborhood to fully take off. There are plenty of areas with perfectly functional business districts that don't get a second look because there's no reason for anyone who doesn't already live there to go there. But if a neighborhood consists exclusively of destination businesses then it will feel more like a tourist area than a real neighborhood; it's a hard sell for someone to move to a place where they can get artisanal vinegar but not a can of baked beans. Often, the presence of a robust functional business district will stymie a neighborhood's potential for gentrification. One thing I've noticed is that destination businesses rarely replace functional businesses, usually moving into abandoned storefronts or replacing other destination businesses. Functional businesses just sort of exist and don't move out until the neighborhood has declined past the point of no return.

As I mentioned in the previous section, housing stock is another major contributor to gentrification potential. Urban pioneers have to look at a neglected neighborhood and see the potential to restore a faded glory: houses that are worth saving, not dumps that should have been torn down ages ago. The one exception to this is the spillover factor; a neighborhood with bad housing stock will still get a boost if it's close to gentrified neighborhoods that have great amenities but have become too expensive, especially if it has an intact business district of its own.

On the other side of the equation, decline follows displacement. The story of declining neighborhoods in Pittsburgh follows a pattern. First, in the 1950s and 1960s, civic visionaries sought to clear slums by replacing them with ambitious public works projects. Forced out of their homes, the residents of these slums needed somewhere to go, and moved to working-class neighborhoods that were already in a state of instability, if not minor decline. (It should be noted that slum clearance was much rarer in Pittsburgh than in other cities, though some wounds still run deep.) More recently, the city has demolished public housing projects that had become crime-ridden hellholes, but their problems only spilled out into low-rent, working-class neighborhoods. What results is a game of whack-a-mole, where revitalization of one area simply leads to the decline of another. That's why I've been less critical of low-income set-asides than I was in the past. I used to be totally free market on the housing issue, but it seems like an inflexible standard only ensures that poverty will remain concentrated, which does little to improve the situation of the poor. Section 8 was supposed to address this problem by getting people out of public housing hellholes and into regular neighborhoods, but it's only worth it for slumlords in declining areas to accept the vouchers, and the result is that entire neighborhoods go Section 8. I grant that it's better than things were previously, but I think things could be better still if we agreed that every neighborhood was going to subsidize the housing of a certain number of poor people. That way we can at least make it so the honest, hard-working people don't suffer unnecessarily, and the kids grow up in a more positive social environment. Maybe I'm being too idealistic, but it seems better than any of the existing alternatives.

Finally, a brief note on stability. Stable middle-class or working-class areas tend to be boring areas that are too far away from bad areas for any spillover or displacement to affect them. There may be long-term factors that could lead to their eventual demise, but there are no obvious causes for concern. The flip side is that, as much as some of these places have been touted as the next big thing, the same factors that keep them from going down also keep them from going up. One factor playing into this is the number of owner-occupied houses and long-term rentals. New residents, whatever their economic condition, simply can't move into a neighborhood if there are few rentals and little turnover in ownership.

VI. The Neighborhood Grading Rubric

The initial goal for this project was to discuss what the future holds for these neighborhoods, and to discuss special considerations that factor into the whole thing (actually, it will mostly be about special considerations, at least for the big neighborhoods). One thing that's important to this exercise is to establish where the neighborhoods are now. I initially developed a complex classification system, then scrapped it because it was too complicated and still didn't explain everything. But as I got to thinking about it, I decided that some sort of grading was necessary to put things in proper perspective rather than relying on qualitative description alone. So I developed a much simpler rubric that should catch everything. I would note that the below isn't to be construed as a desirability ranking, although it will be made apparent that some of the categories only describe undesirable areas.

Upper Middle Class: This includes upper class as well, but truly upper class areas are rare enough to make this a distinction without a difference. These are highly desirable but may have gone past the point of trendiness to the point of blandness (though not necessarily). These include places where gentrification reached the point where it’s all chain stores, but also places that never really gentrified because they were always nice.

Gentrifying: These are the hotspots that everyone knows about. What separates them from the upper middle class areas, even if they are more expensive, is a sense of dynamism and a raffish air. Students and bohemian types still live here. There may be older working class homeowners who never left, and poor renters who haven’t been forced out yet. There may still be a few rehabs for sale at somewhat decent prices. Most of the businesses are locally owned, and it probably still has a functional business district from the old days.

Early Gentrification: This is the point where a neighborhood starts making the transition from working-class or poor to middle-class or trendy, but isn’t quite there yet. Most of the businesses are functional, but there are a few cool places for those in the know. The hipsters are starting to move in. People are buying derelict houses at rock-bottom prices and fixing them up. But the normies don’t know about it yet; tell most suburbanites you’re going to a bar there and they either think you’re going to get your wallet stolen or wonder why you want to hang around old people. The neighborhood is still rough around the edges, and may still have a decent amount of crime and a high minority population. It probably still looks rather shabby. It’s perfectly safe for those with street smarts, but it’s still sketchy enough that you wouldn’t recommend it to tourists.

Stable: Not necessarily boring, but not going anywhere. There’s probably a good functional business district, but few destination businesses. Every once in a while one of the destination businesses might become popular enough that people think the whole neighborhood is going to go off, but it never seems to happen. And that’s if it’s lucky. The upside, though, is it’s very safe, and affordable to buy here. This also includes middle-class black areas that suburban whites assume are hood but are actually rather quiet.

Early Decline: These are the neighborhoods that just don't seem like they used to be. Crime is up, property values are down, and the houses are starting to look unkempt. Most of the long-term residents are elderly, and the newer residents are transients of a distinctly different class than the elderly ones. They may be blacks who were displaced from nearby ghettoes, or they may be white trash. There's increasingly conspicuous drug activity, but no gangs yet. There may still be a functional business district, but there's rarely anything destination about it, maybe an old neighborhood institution that's still hanging on. These are perfectly fine to rent in if you don't mind a little excitement in your life, since they're still relatively safe for normal people, but they aren't places you want to commit to.

Rapid Decline: This is the point where gang activity has become a problem, and gunshots are no longer a rare occurrence. If there was a white working class here, they're now dead and gone, and if there was a black middle class, they're very old. Residential sections are starting to see blight and abandoned houses. There's still probably a reasonably intact business district, but it's entirely functional at this point and consists mostly of stereotypical ghetto businesses. It is, however, still well populated.

Ghetto: A neighborhood that has bottomed out; it can’t get any worse than this unless it disappears entirely, which seems almost inevitable at this point. Few intact blocks remain. If there’s any business district left it’s scattered remnants (though there’s almost always some kind of newsstand). There’s probably gang activity, but there’s little territory worth defending. The atmosphere is desolate and bleak, as the remaining residents are only here because there’s nowhere else to go. Crime, while still a problem, is probably lower here than one would think, simply because there aren’t too many people here to be criminals, and equally few available victims.

The following are special cases that don’t fit into the above continuum particularly well.

Deceptively Safe: These are areas that look sketchy as hell but are actually decent places to live. They are usually poor neighborhoods where the properties are in somewhat shabby condition but are occupied. Unique to Pittsburgh (probably), this also includes places that look like part of West Virginia was transported into the middle of the city. These are mostly very small micro-neighborhoods that are poor but just don’t have the population or foot traffic to support any serious crime. Buy low, sell low.

Projects: Pittsburgh has a few “project neighborhoods” that only really exist because it built most of its public housing in odd places where nobody wanted to build before. Most of these projects don’t exist anymore, so the assumption that these are invariably bad areas is mistaken, especially since one of the few remaining projects is a senior citizen high rise. Most of these are an odd mix of different uses that merit individual treatment.

Student Areas: Transient population, unmaintained properties, exorbitant rent for what you get, multiple unrelated people living together common, noise, public drunkenness, vandalism — everything a real ghetto has except violent crime and gang activity. This doesn’t describe all student areas, but areas where the percentage of students reaches a certain threshold have a much different dynamic than regular neighborhoods. First, these areas are relatively safe considering how dysfunctional they are in every other respect, and second, while the properties are in poor condition, there is little blight or abandonment because the slumlords know they have a captive audience. Also, the presence of a university usually means that the area sees a lot of outside visitors so more destination businesses develop, and there are plenty of places catering to students. Altogether a unique dynamic, though no one not in college would even consider living here.

That’s it for the preliminaries, stay tuned for Part I, where I discuss Downtown and the other “tourist areas” in its vicinity.

If you're looking for some kind of Golden Age where coddling such as you describe didn't exist, you're not going to find it. If you do find it, it's going to be uncomfortably recent and remarkably brief. The first prominent Twitter ban was of internet troll Charles C. Johnson in the spring of 2015. The first Twitter ban of anyone who was well-known for something other than being banned from Twitter was Milo Yiannopoulos's ban in the summer of 2016. Twitter was founded in 2006 but wasn't relevant until around 2009, so that's 6 or 7 years of virtually unmoderated Twitter. Reddit started banning its more controversial subs around 2015 as well, but it didn't start to become remotely popular until 2011ish, and didn't reach the kind of cultural prominence of Twitter until well after the censorship had been implemented. YouTube has always imposed some level of censorship (e.g. no porn), but started demonetizing videos advertisers found distasteful around 2016. YouTube is a special case, though, because while it's been popular since practically its launch in 2005, most of the early videos were reposts of traditional media and stupid home videos, with occasional how-to content. The idea of making a living from YouTube didn't really arise until around 2012, with the emergence of PewDiePie, and most of these people were putting out that kind of crappy content designed for teenagers until around 2014, when the idea of producing quality, documentary-style content started to take hold.

So we're looking at what was, at most, a 5 year period where Americans weren't being coddled, starting sometime in the very late '00s and ending in the mid-'10s, when the major social media platforms were prominent enough to have cultural relevance but were relatively uncensored and unmoderated. But what about before that? Facebook was limited to college students before 2008. Most of the others didn't exist before 2004. There were blogs, of course, but there are still blogs, and no one really moderates them anyway. They aren't as culturally important as they used to be, but that's because most of the popular ones were anodyne enough that their creators had no problem fitting into whatever restrictions the social media companies are enforcing. Before 2000 the internet was a buzzword and media curiosity, not something that was central to people's lives or replaced anything particularly relevant. It was also viewed by most people as a pointless cesspool, precisely because of its totally unregulated nature (I remember when the content of most arbitrarily selected chatrooms was profanity-laced outbursts from teenagers). It should be mentioned that this was also a time when the most popular ISP was AOL, notorious for their "Walled Garden" approach.

Before 1995 the internet was the exclusive domain of enthusiasts and hippies who thought that the medium had the power to transform consciousness and make the world a better place. This was also a time when the internet had little to no cultural relevance. The dreams of these early adopters were shattered in the latter part of the decade, when the masses came online and promptly put an end to any hope of a new utopia. In the 1990s the average American's ability to contribute to the public discourse was limited to call-in talk shows and newspaper letters to the editor, and you better believe that they had standards on what they would allow. The only place on television to see tits or hear the F-word was HBO and Cinemax. The YouTube equivalent was public access cable. There was various pearl clutching about goths, Marilyn Manson, Mortal Kombat, Law and Order, and a bare ass on NYPD Blue. Prior to NYPD Blue, even mild swearing was rarely heard on TV.

Prior to the 1980s songs were regularly banned from radio for being suggestive, or sometimes for having unintelligible lyrics that might be suggestive. Prior to the 1970s pornography was virtually impossible to come by; Playboy didn't show pubic hair until 1969 I think. George Carlin's "7 Words" bit led to FCC standards that prohibited certain material from being aired during the daytime. Of course, before this, such standards were unnecessary, because no one would even think to air such material. In the 1960s mildly vulgar comedy like Lenny Bruce's was enough to get you sentenced to 4 months in a workhouse. Books like Naked Lunch were banned in some places and hard to find in others. From 1934 to 1968 Hollywood was bound by the Hays Code following an uproar over the content of films. Nothing in pre-code Hollywood would be particularly objectionable by today's standards. Prior to the 1930s Ulysses was banned in the US. Prior to that there were Comstock laws. Prior to that was the Victorian Era, the most notoriously prudish period in Western history, where many of our most cherished euphemisms come from. And I don't know too much about the Regency period, but if you have to go back that far your argument sucks anyway.

The point I'm trying to make is that censorship doesn't happen in a vacuum. The censorship of social media was a direct response to its increased reach and popularity. Of course advertisers don't want objectionable material on YouTube; they don't want it on cable TV (which is unregulated), so it would be ridiculous to expect them to not want it elsewhere. On the whole, society is much more permissive than at any time in the past. To you it may seem like things have gotten more restrictive, but I suspect that that's because, as an admittedly always-online person, you were participating in communities that only mattered to other always-online people, which isn't most people. Once these communities became mainstream, there was pressure to sand off the rough edges to make them palatable to mainstream tastes. If you want to publish edgy content you still can, you just have to publish it in places where it will only be viewed by a small community of devotees and won't make any money.

This is an object lesson in why people who think they don't need lawyers for stuff like this generally do need lawyers (unless, of course, this guy was so bad that he had a lawyer and the lawyer couldn't do anything about it). His big mistakes were:

  1. He tried to downplay the 1983 commitment with testimony that was contrary to the medical records. "My bitch ex-wife gave me some pills that made me crazy, but not too crazy, because the doctors quickly realized I shouldn't have been there" is pretty much textbook self-serving bullshit that judges hear regularly. A lawyer would have examined him so as to frame the matter as a guy who turned to drugs to deal with the stress of a bad marriage, which caused him to do regrettable things that he doesn't entirely remember.

  2. He lied to the psychiatrist who examined him about why he was there, because he thought he needed to in order to get an appointment, and then admitted his dishonesty to the court. A lawyer would have made him an appointment with a doctor who would provide the exact kind of evaluation the court looks for in cases like this.

  3. There were statements in the file suggesting the guy was taking psych medications, and his only explanation was to flatly deny that he was taking any. He also seemed to have a more intimate knowledge of Lifestream and the doctors who practiced there than someone whose contact with the mental health system ended 40 years prior.

  4. Most people who were involuntarily committed will have had continued psychiatric treatment for some time afterward and a history of how their condition progressed. When I was at the disability bureau, if I saw an involuntary commitment on someone's record and no other psych history, I'd assume they were homeless or in some other kind of situation where they were prevented from getting treatment.

  5. We have no idea, from reading the opinion, what this guy was actually like or how he came off in court.

In other words, the judge could tell that the guy was full of shit, and since he has the burden of proof, she wasn't going to grant the expungement. Keep in mind that the court isn't going to subpoena this guy's entire medical history, so they're only relying on what he brought with him. Given that the guy doesn't come off as trustworthy and there's reason to believe he's more familiar with certain things than he's letting on, the court might have suspected that the guy wasn't providing a complete mental health record.

Um, because it didn't exist? The Dow futures index wasn't launched until 2015. The only big crash since then was the March 2020 crash, and people were definitely talking about the Dow futures index then, but with other things taking the spotlight, you probably weren't reading the business section. Now that the tariffs are THE story of the week, you're paying closer attention to what's being written about the markets.

Political Quick Hits

A few scattered thoughts that don't merit separate posts:

The Nancy Mace Capitol Hill bathroom saga has come to an unceremonious close. Sarah McBride issued a public statement that she came to Washington to legislate, not to wage personal battles, and that she'd abide by whatever the House wanted. Trans activists were predictably disappointed, not only wanting a more forceful response from McBride but a unified response from House Democrats, but they weren't going to get it. The only notable public statement came from AOC, who pointed out that neither Mace nor Mike Johnson could tell you how they planned on enforcing such a rule, unless they planned on posting a guard who would check the genitals of anyone who looked suspicious. She also accused Mace of cynically trying to exploit the issue to get her name in the papers. Mace responded by calling AOC dumb and her suggestion disgusting, but she didn't offer any alternative enforcement mechanism. Johnson himself sided with Mace, but only to the extent that he believed existing rules favored her interpretation, and he never said that he'd be bringing Mace's resolution to a vote.

This whole tack seems like it's part of a new strategy for the Democratic Party. Five years ago an incident like this would have resulted in mass condemnation from the entire party, including those in leadership positions. The sum total of opposition in this case came from three people, and all three seem like they were hand-selected. Two were LGBT themselves, and the only one with any national profile was AOC, easily the most liberal member with any credibility. And even then, the comments were unusually focused. All three reps managed to hit just two themes: That the suggested rules were unenforceable, and that Mace is doing this as a publicity stunt. No long jeremiads about trans rights or anything. It's almost as if they've finally become aware that the issue is a loser, and rather than engage they'd rather let the issue quietly die while letting the least vulnerable members of the party get a few potshots in.

Meanwhile, in the wake of the Gaetz withdrawal, the center of attention among Trump's controversial cabinet picks has shifted to Pete Hegseth. In addition to falling woefully short of the traditional qualifications for Defense Secretary, Hegseth is taking heat for sexual misconduct allegations in his past and for comments suggesting that women shouldn't serve in combat. Once again, Democrats have been unusually silent, with the exception of Senator Tammy Duckworth, whose legs were blown off in Iraq. I suspect this whole thing is an exercise in biding time. There is serious doubt as to whether Hegseth will survive the confirmation process. But a sex scandal and some controversial comments won't be enough to sink his nomination on their own. The biggest knock against Hegseth is that he's written books where he essentially says that conservatives should aim for complete victory over liberals, whom he describes as enemies of America, and suggests that it may ultimately be appropriate to use the US military in pursuit of that goal.

If Democrats bring this up now then he gets to respond on his own terms, and by the time confirmation hearings roll around the results become predictable. On the other hand, if they start hammering him about predictably dumb shit now then he spends his energy responding to predictably dumb shit that he gets predictably hammered about during confirmation hearings, only for Democrats to change tack in the middle and start asking him about all the controversial opinions in his book. I wouldn't expect him to be caught totally off guard, but he won't have had weeks to rehearse his responses. How he responds to this kind of grilling could be the difference between whether the requisite number of Republican senators vote against him or not.

One other notable figure Democrats have been eerily silent about is RFK, Jr. I suspect this is because while rank and file Democrats hate him for his dumb woo woo opinions on vaccines and other things, actual politicians realize that he's the most liberal cabinet member they're likely to get. Hell, he's probably more liberal than anyone Kamala Harris would have appointed to the post. So Democrats won't challenge him, just lob softball questions at him asking him to expound on his opinions of abortion, single payer healthcare, dangerous chemicals, and big bad pharmaceutical companies. If the guy is going to be confirmed anyway, and is likely the best you're going to get, then why not throw your support behind him in a way that makes Republican senators squirm? Worst case scenario his nomination fails due solely to opposition from the party that nominated him.

After Hegseth, Tulsi Gabbard is the nominee the smart people seem to think has the least likelihood of being confirmed. I don't think it behooves Democrats to back her in the way it behooves them to back RFK, but her nomination presents an interesting conundrum. A large share of Trump voters supported him, at least in part, because he was perceived as an America First isolationist who wouldn't get us into any new wars and would try to get us out of existing ones. Yet Tulsi is the only cabinet nominee who seems to embody that vision. The others (Rubio, Waltz, Hegseth, Ratcliffe) are all traditional conservative hawks. Her presence in the cabinet would only serve to foment the same kind of dysfunction that riddled Trump's first cabinet. Since she's a former Democrat and a tepid member of the GOP, Republicans might prefer a more united front when it comes to foreign policy and sweep her aside as the Democrats did, and for the same reasons. That being said, I've always been skeptical of Trump's supposed dovishness, as I've never met a Republican who didn't want to bomb Iran at the first opportunity. But I still think it's odd that he hasn't just gone full neocon.

The categories aren't really correct. 1 and 2 don't make sense because disability is a binary, and the benefit amount is determined at the financial qualification stage. This is the preliminary stage where SSA makes sure that claimants are legally qualified and has to be completed before they'll send it to adjudication. Once we start considering medical eligibility it's a binary; you don't get more money because you're "more disabled" or whatever. The sole exception would be that there's an optimization that can be made for people who continue to work but make below the financial eligibility threshold, but that really has nothing to do with the determination office. 3 isn't really a category because I had no way of knowing whether someone was using an attorney, what kind of advice they were getting, or whether they genuinely thought they were disabled. I'd break down the claimants into the following categories:

  1. The Classic Case: The first category consists of the typical 50+ blue-collar worker (usually) who has some kind of musculoskeletal disorder (back problems being the most common) that prevents them from doing heavy labor. When I was there, these probably constituted half of our approvals. These people were genuinely hurt, but may or may not have been disabled, depending on the severity of their condition. For example (real case, though my memory isn't precise), Larry was a 55-year-old black guy who had worked as a welder for most of the previous 20 years, with no other employment. His back problems had been developing for some time, causing him to miss work. About a year prior he had back surgery and was off work for a while recovering, and felt good doing work around the house. He tried to go back to work once he was medically cleared, but he only lasted a few weeks. He wasn't having the constant pain he was having before, but 8 hours of bending over, kneeling, and crawling around exacerbated the pain, which went away with rest. The medical records were about what one would expect from someone experiencing the symptoms he described. Easy approval.

  2. The Generally Unhealthy: People with myriad legitimate health problems that don't rise to the level of a disability. These people are usually over the age of 40, can be male or female, and have significant employment history, though mostly at the kind of jobs that don't pay particularly well. They have high blood pressure. They have diabetes. They have fibromyalgia. They have back pain. They have a heart condition. They're obese (usually, though not always). These tend to be the most annoying cases to deal with because the application asks them to specify the conditions for which they're claiming disability, but if (more like when) we find they have 500 other problems we have to ask whether those affect their ability to work, and of course they do, so now we have to keep requesting records from doctors that take forever to receive and don't contain any usable information. The worst part is that they are all on antidepressants they got from a PCP and they've never seen a psychiatrist. So when they tell us their anxiety and depression affect their ability to work we have to schedule a psych workup, which takes forever because these people always live in rural areas with one guy who's willing to accept the low rates we pay in exchange for filling out significantly more paperwork than usual. Once they actually see somebody who confirms that they aren't so anxious they can't go to the grocery store without freaking out, they get denied.

  3. The Complex Cases: People who obviously can't work but only due to complicated situations that are hard to qualify under the existing criteria. People with lingering stroke recovery symptoms, people with rare auto-immune disorders, rare diabetic conditions, people who are fine most of the time but have conditions that flare up every couple months and put them in the hospital for 2 weeks, during which time they lose their jobs. It's 50/50 whether these are approved or denied upon initial determination. If they get someone who looks at the big picture, they'll be approved; if they get someone who is a stickler for the rules, they'll be denied. All supervisors would tell you to deny these people. If they appeal, they'll almost certainly be approved.

  4. The Psycho Kids: These are people under the age of 30 who have never had a job that pays above minimum wage and have no education beyond high school, who are claiming disability due to vague depression/anxiety. Again, they're taking medication but it's unlikely they've ever seen an actual mental health professional, and if they did it was something like talking to a therapist once or twice. They certainly aren't receiving any regular psychiatric treatment, and there are no suicide attempts or hospitalizations. These are almost always rural whites. They are invariably denials.

  5. The Accidental Cases: These are similar to the first type of case except the claimant is younger and is claiming disability not based on a degenerative condition but on the inability to do his job following a traumatic injury. They are usually misinformed about the law, however, and think that they're disabled because they can't go back to their normal job, and are usually under this impression because they worked with an older guy who got it. Unfortunately for them, as long as they're capable of doing a sedentary job, they aren't disabled. One guy, who broke his back in a motorcycle accident, told me his doctor told him he wouldn't be able to be a mechanic anymore and should do something with computers. He then jokingly told me that he didn't know anything about computers. Though he didn't realize it, this was practically an admission from his doctor that he was still able to work. These cases are almost always denials.

  6. The True Psychos: These are the real psych cases, almost always SSI, usually involving younger or homeless claimants. Mental retardation, schizophrenia, anxiety/depression serious enough to end in multiple hospitalizations, severe bipolar disorder. Children usually also have serious behavioral problems at home or at school. Usually approvals, with a few weird denials mixed in due to the occasional odd circumstance.

  7. The Death Bed Cases: These are people with terminal diseases who aren't going to survive for much longer. There's an entire division that deals with nothing but these to get the approvals out faster, though in some cases there's an expedited process where they can start receiving benefits before full approval. Always approved.

  8. The One-Shot Cases: People who have one weird condition that obviously doesn't qualify them, but they apply anyway on the off chance they're approved. One woman in her early 30s applied because of heavy vaginal bleeding. This also includes people who have already retired, get some condition, and apply for SSDI to top up their pensions before they qualify for regular benefits. One guy who had a desk job with the state Auditor General's office tried applying because he started having mini-strokes after he retired, though he was hard-pressed to explain how they would have theoretically prevented him from working had he not been retired. These are denials.

I hesitate to categorize them based on genuine cases versus those that are simply trying to game the system because, with the possible exception of the psycho kids and the one-shot cases, I don't really think that anyone is consciously trying to game the system. The rest of these people are either genuinely disabled or genuinely think they're disabled. The classic cases are actually disabled but will go back to work if they can. The generally unhealthy aren't disabled but are convinced they are, the complex cases may or may not be disabled but they can be forgiven for thinking they might be, the accidental cases think they are based on a misunderstanding of the law, the true psychos are disabled but might not know it themselves, same with the death bed cases. Even among the one-shot cases, a lot of people think they're disabled based on the simplistic formula of medical condition + makes my job difficult = permanent disability.

And even if some people are consciously trying to game the system, their cases are so obviously bullshit that no one trained to adjudicate them would ever consider approving them. These articles can cite various doctors and lawyers all they want, but even with coaching, nobody who is willing to "retire" because $945/month is forthcoming is intelligent enough to keep up an elaborate ruse for decades. In my career since, I've had to prepare witnesses for depositions, and while I'm not going to say that witnesses never lie in depositions, I can say that it's not because of anything their attorneys told them to say. Properly preparing a witness for deposition takes days, unless they're a corporate representative who has testified several times before. Even for a corporate witness, it's not easy to prepare them to answer questions in a way that isn't inadvertently damaging. And these are people with college degrees and careers at the highest levels of business. Some hillbilly who barely graduated high school isn't going to be able to effectively fake disability no matter how many doctors and lawyers talk to him, because he's not going to know how to answer the questions. I'd be more worried about a legitimate claimant being denied because they gave the answers they thought the adjudicator wanted to hear than about a faker being approved because he was coached. Unless they have a really sophisticated understanding of how the process works, they're not going to be able to do it, and the factors are complicated enough that they're not going to get such an understanding. Even the people here, or who write psych blogs, or articles from NPR, or who are physicians treating claimants, don't seem to have such an understanding.

Edit - Pinging @ThomasdelVasto

I worked for the state disability bureau in 2011, and I can confirm that your theory is basically correct. There was a huge application backlog stemming from the recession, and a huge chunk of it was people in their 50s who were laid off from blue-collar jobs and claimed bad backs, shoulders, etc. from slinging sheetrock for 40 years or whatever. The reason the bulk of the beneficiaries are in their 50s is because the law makes it very difficult to qualify if you are under 50; you have to either have a condition that meets a defined listing (and the listings are for the kinds of things that, if you have them, no one's going to question your inability to work), or be completely incapable of doing sedentary work. If you're over 50, it's assumed you can't adjust to other work, so you can only be sent back to a job you've done in the past 20 years. In some cases, it may be determined that you can do lighter work similar to what you did before (e.g., an auto mechanic (medium duty grade) can work as a tech at a quick lube place (light duty grade)), but that's pretty rare. If you're over 50 and already have an office job you're also out of luck, since you're effectively held to the same standard as an under-50.
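For illustration only, here's a minimal sketch of that age-50 logic in Python. This is a deliberate oversimplification, not the actual SSA rules (the real grid rules also weigh education, transferable skills, and exertional levels), and every name in it is made up:

```python
# Toy model of the age-50 logic described above; NOT the actual SSA rules.
def likely_disabled(age, meets_listing, can_do_sedentary,
                    can_do_past_work, past_work_sedentary):
    if meets_listing:
        return True  # listing-level conditions are approved outright
    if age < 50 or past_work_sedentary:
        # Under 50 (or 50+ with an office job), you must be incapable
        # of even sedentary work.
        return not can_do_sedentary
    # 50+: no adjustment to other work is assumed; the only question
    # is whether you can return to work you've actually done.
    return not can_do_past_work

# A 55-year-old laborer who could in theory do a desk job but can't
# return to his old work: approved. The same profile at 45: denied.
print(likely_disabled(55, False, True, False, False))  # True
print(likely_disabled(45, False, True, False, False))  # False
```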

So a lot of people who were laid off, especially from the construction industry, especially those who were close to retirement anyway, just filed for whatever injuries they had accumulated over the years and said that was the reason they stopped working. To be fair, though, a lot of these people ended up going back to work while their claims were pending, so I don't want to paint with too broad a brush. The difference between then and now is that people above 50 but below 62 were part of the largest generational cohort in US history, so there were simply more of them. In 2008 only the oldest boomers had reached 62, and the youngest were still in their 40s. By 2020, everyone born before 1958 was 62 or older, and the youngest were already in their mid 50s. This gives 6 years' worth of people to make claims, with the number going down every year.

A lot of the Boomers who retired during COVID did so because they already had enough savings to retire. The ones who didn't weren't likely to be laid off either, since COVID unemployment hit the service industry mostly and didn't really affect much else. Car mechanics and pipefitters weren't getting laid off, and if they were they were the ones at the bottom of the totem pole, not the ones who had been in the union for decades. The 2020 recession was also sharp and brief, unlike the 2008 recession where the recovery seemed to drag on for years until the job market felt normal again. It wasn't until 2013 that extended unemployment relief was ended.

So yeah, now that most of these people are on regular Social Security, and there hasn't been a comparable recession to cause a flood of new applicants, and the generational cohort of people in their 50s is smaller than it was before, there's no reason to have expected claims to keep rising. The 2010s projections were hitting right as the flood of applications was already peaking and about to decline.

I think the first question should be "Who is purporting this to be authentic?" The tweet you posted is just a debunking of the photo but it makes no reference to the source. Did the person who posted the tweet take this picture herself? Did it appear in Western media? I did a reverse image search and the only places I can find this on the web are from CCP apologists using this photo as evidence of CIA or MI6 or whatever involvement, which isn't a good sign.

I can assure you that if you successfully complete the Hock and then get a girlfriend it will have nothing to do with the Hock itself. I don't know how to tell you this without hurting you deeply, but most women don't give a shit about stuff like this. I'm an advanced skier. I've not only skied some of the gnarliest in-bounds terrain in North America, but I felt completely comfortable while dropping in even when I hadn't seen it before. I couldn't tell you the last time I stared down a line trying to get myself psyched up to do it. I don't generally mention this to women I'm trying to date. Hell, I was out with a girl last weekend and while the subject of skiing came up, I only mentioned it because she asked me about my hobbies. I left it at "skiing" and didn't elaborate. And I was hoping more that she skied as well because I have a great group of ski buddies and we have a lot of fun in the winter and it would be nice to include her in something like that. If I had brought up all the gnarly shit in a desperate attempt to prove what a badass I am, at best she would have ignored it, and at worst it would have made me look like a self-aggrandizing asshole.

You're also forgetting that if it even were something that impressed women, you still have to get the date in the first place. Unless you're going out a lot already you better solve this problem before you do anything. What are you going to do, approach women at bars and tell them apropos of nothing that you went on a survivorman expedition and by the way, do you want to go out with me? Also, keep in mind that even if this does work, unless she's already well-versed in outdoor survival it's not going to make much difference what you actually do. Any girl who doesn't ski isn't going to be impressed when I tell her I ski the Pali face at A-Basin because to her that's completely meaningless. To her, even an intermediate run would look like instant death. The only girl who I could see that being a positive to is one who skis about as well as I do and is excited to have someone to share those experiences with. In other words, any girl who is going to be impressed by the Hock will probably be equally impressed by a guy who's been winter camping a couple times, unless she's also into that sort of thing.

This is akin to a trend I've noticed with promiscuous people in general. I'll call it the "promiscuity trap" and it applies equally to men and women, though women are usually more open about it. The vast majority of relationships begin with both parties in more or less the same position—they're looking for companionship and intend to get to know the other party better and treat the relationship as a going concern. This isn't to say that the occasional flings don't happen, but they're the exception and there's usually a specific reason. Sometimes the reason is benign, like you meet someone from another city while on vacation and there's chemistry but no long-term potential there. Other times, though, it's more sinister, like you just got dumped and are looking to feel good about yourself. But when most people engage in the second kind of hookup it's due to an acute emotional situation and doesn't become a habit.

People in the promiscuity trap tend to dwell in this second world all the time. They have a constant underlying self-loathing that has them seeking instant validation from a sexual partner. But since availability trumps compatibility, these relationships never last very long. And the inevitable failure only feeds into the self-loathing more. This whole process is compounded by the fact that promiscuous people tend to be around more promiscuous members of the opposite sex than average, but aren't really any less capable of developing genuine feelings for someone else. So if a promiscuous woman sleeps with a promiscuous guy and ends up liking him there's a good chance he'll only use her for sex and dump her as soon as the next opportunity presents itself, and if a non-promiscuous guy likes her there's a good chance she doesn't like him and just wanted sex. So of course the original author talks about how she fucked men over or they fucked her over.

By the time the stars align and they meet someone whom they like and who actually likes them back, the cycle of self-loathing being validated and self-medicated with sex ends, and they're left wondering how anyone could actually like them enough to genuinely want to spend time with them. I know about this because I have a friend who fits this pattern exactly, and when I read this excerpt my mind immediately jumped to her. Then I thought of how all the promiscuous people I know seem to fit the general pattern, and the whole theory coalesced. And yes, she's admitted to me that self-loathing has a lot to do with it.

Just because it came off as competent based on initial reporting doesn't mean it was competent. He committed murder in one of the most heavily surveilled parts of the country. His entire stay in New York was known and public within 48 hours of the murder, and he was caught within 4 days. The only thing competent about this murder was that he wore a mask and nondescript clothing and left the area fairly quickly. Just because he wasn't a complete moron doesn't make him a criminal mastermind.

I can't offer any empirical data either, but I think the fact that you're comparing Marvel movies to 90s action movies is the key here. The former existed back then, but they've since come to dominate the field and nearly replace the latter. Comic book movies were always targeted toward a broader audience than action movies, particularly an audience that included children and families. The idea that I wouldn't have been allowed to see a Batman movie when I was a kid because of sex and nudity would have been unthinkable in the 90s. Even big 90s blockbusters like Independence Day and Jurassic Park didn't have much, if any, sex or nudity, because they were aiming bigger than a typical Schwarzenegger action movie. Despite some efforts in Hollywood to change this (most notably Joker), movies based on comic books are always going to be viewed primarily as children's films, and there's accordingly a limit to how much sex they're going to include. You're comparing them to a totally different genre.

Part 2

With no other teams making offers, and Jackson’s relationship with the Ravens deteriorating, it’s expected that he will refuse to sign the tender offer and sit out the season. This expectation was bolstered today by Jackson posting that he had requested a trade at the beginning of this month, and subsequently had the tag slapped on him. While the 2020 CBA technically ended contract holdouts, Jackson’s situation is different because he’s not currently under contract. He’s subject to the tag rules, but the league has no basis for fining him for missing team activities.

This refusal to sign has only happened once before, and with disastrous results. In 2018, Le’Veon Bell was one of the best running backs in the league. He had already played one season on the franchise tag, and, unable to come to terms on a deal, the Steelers tagged him again (it can be done two years in a row, but it’s significantly more expensive). The Steelers purportedly offered him a deal that would have made him the highest-paid RB in the league, but this wasn’t enough; he was also a key component of the passing game, and thought that he deserved RB money and WR money.

Bell entered 2018 as a training camp holdout, not unexpected since this was still common. What was uncommon was that he didn’t show up for the first game either. Or the second game. It was speculated that he might come back during the bye week. He didn’t. He came back to Pittsburgh in early November, as he had until the 13th to sign his tender offer before forfeiting the season. But he never reported to the team. When the Steelers said in early 2019 that they wouldn’t tag him again, Bell had technically won.

But then came reality. Bell had lost out on roughly $14.5 million in salary from not signing the tag, and an estimated $19 million from not taking the Steelers up on their offer. When he hit the market in the Spring of 2019, he signed with the Jets, who offered him a deal worth an average of $13.1 million a year, less than the $14 million on average the Steelers had offered. And the first year of that deal paid roughly what he would have made on the franchise tag the previous year. And only some of that money was guaranteed. Little more than a year into his time with the Jets he demanded a trade, realizing he didn’t like playing for an awful team. His own production had suffered in the absence of a decent O-line. The Jets simply released him, and he signed with the Chiefs, a good team, but found himself at the bottom of the depth chart, and later made critical statements about Andy Reid.
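To put rough numbers on it, here's a back-of-the-envelope comparison using only the figures above; it ignores guarantees, taxes, and injury risk, and assumes the same Jets deal would have been there either way:

```python
# Back-of-the-envelope on Bell's holdout, in millions, using only the
# figures cited above; ignores guarantees, taxes, and injury risk.
tag_2018 = 14.5       # tender salary forfeited by sitting out 2018
jets_avg = 13.1       # average annual value of the 2019 Jets deal
steelers_avg = 14.0   # reported average of the Steelers' offer

# Compare three paths over 2018 plus three more seasons:
sit_out    = 0        + 3 * jets_avg   # what he actually did
sign_tag   = tag_2018 + 3 * jets_avg   # play 2018 on the tag, then walk
take_offer = 4 * steelers_avg          # the deal he turned down

print(f"sit out:        ${sit_out:.1f}M")     # $39.3M
print(f"sign the tag:   ${sign_tag:.1f}M")    # $53.8M
print(f"Steelers offer: ${take_offer:.1f}M")  # $56.0M
```

However you slice it, the holdout path trails the sign-the-tag path by the entire forfeited tag year, since he ended up with roughly the same market deal a year later anyway.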

In the span of 4 years Bell went from being the kind of player who could credibly demand becoming the highest-paid player at his position to having completely burned his bridges with 3 different teams and being out of the league entirely. The holdout was an unmitigated disaster. Like Jackson, Bell was injury-prone, which may have had something to do with the Steelers’ reluctance to give him what he wanted, but it’s hard to see them making him a better offer in any event. The one crucial difference is that Bell was hit with the exclusive tag, meaning he couldn’t negotiate with other teams and can thus be forgiven for thinking his market value was higher. There’s no excuse for Jackson; if teams are unwilling to negotiate with him now, it’s unlikely that they will after he sits out a year. And Jackson has a recent example of how that works out that Bell didn’t. Complicating matters is the fact that Jackson is acting as his own agent. Any agent worth his salt would have told him that the strategy he’s been pursuing thus far is a bad one and probably would have gotten a deal done last offseason. Any agent also would have told him that his request for a trade was inappropriate since he wasn’t under contract at the time and thus couldn’t be traded until he signs the tender offer.

The interesting thing to me about this, though, is the reaction. Most situations involving pro athletes elicit one of the following responses:

  1. Fans and media mad at players for acting unreasonably, e.g. Antonio Brown

  2. Fans and media mad at team for not treating player fairly, e.g. shady clauses, voided guarantees, etc.

  3. Fans and media mad at league for collusion, e.g. Colin Kaepernick, every threatened lockout

Instead, there’s a sense of sad resignation. Lamar Jackson was supposed to win Super Bowls. Instead Ravens fans got one playoff win and 2 unfinished seasons due to injury. Still, Jackson is one of the most talented and exciting QBs in the league, and is very much deserving of a nice contract. But nice wasn’t good enough, and he seems intent on throwing his career down the toilet to prove it. And he doesn’t even have the courtesy to become unlikeable. Prior to his holdout, Bell had been publicly dissing the team for years for supposed lack of respect, and during his holdout he claimed to be staying in shape but was evidently spending a lot of time at strip clubs. When he came back to Pittsburgh, his first public sighting wasn’t at the team facility but playing pickup basketball at a local LA Fitness. That may not seem like a big deal (he was exercising, after all), but coaches hate it when players do stuff like this because they have a tendency to injure themselves.

Jackson remains appreciative of the fans, if not the team, but seems to be taking advice from friends and family rather than an agent, which is inexcusable because NFL agent fees are capped at 3%, and a high earner like Jackson could probably negotiate 1%. The Ravens seem determined to do everything they can to prove to Jackson that he won’t get a better deal elsewhere, although the terms of the non-exclusive tag may limit that, since the teams most likely to sign Jackson are rebuilding teams that can’t afford to give up draft picks and are reluctant to put out offer sheets that the Ravens will probably match. The most logical thing would be for the 49ers to offer Trey Lance and draft picks in exchange for Jackson, but that would require Jackson to sign the tender offer first, and wouldn’t give him a new deal, just a chance to play for a different team and maybe negotiate a long-term deal. It’s complicated, and who knows how it will play out.
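On the agent-fee point, a quick sketch of the scale involved. The deal size here is purely hypothetical; the 3% cap and the ~1% rate for top earners are the figures above:

```python
# Why skipping an agent to save fees is penny-wise. The deal size is
# hypothetical; the 3% cap and ~1% negotiated rate are from above.
deal_value = 200_000_000  # hypothetical long-term QB contract

max_fee    = 0.03 * deal_value  # the cap on agent fees
likely_fee = 0.01 * deal_value  # what a high earner can negotiate

print(f"max fee:    ${max_fee/1e6:.0f}M")     # $6M
print(f"likely fee: ${likely_fee/1e6:.0f}M")  # $2M
# Either figure is dwarfed by what bad strategy can cost, e.g. a full
# season's tag salary forfeited by sitting out.
```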

I have some recent experience with this. I was at a meeting yesterday afternoon with a state park that works extensively with an outdoor nonprofit whose board I serve on. Towards the end of these meetings, after all the main items of business have been discussed, the whole thing always devolves into a general Q&A/airing of grievances with the park, and this time the topic of bathhouses came up. The park is primarily known for whitewater and built a bathhouse several years ago following visitor complaints about bare asses in parking lots. Part of this arrangement was making outfitters stage on their own property while the park would provide a bathhouse for private boaters. A friend of mine who serves on this board owns an outfitter and built a nearly identical bathhouse on his property the same year the park built theirs. The park's bathhouse cost $750,000. His cost $35,000 and required more earthwork. The topic came up because the bathhouse the park built was torn down a few years ago as part of a PennDOT redevelopment project that everyone was against. Part of the project was that PennDOT would replace the bathhouse they tore down, which still hasn't happened 3 years later. The cost of the new bathhouse? $1.9 million.

The explanation that we got for this kind of discrepancy is that it's the nature of the open bid process. When using outside contractors, the specifications are strict and inflexible, prevailing wage rules apply, there are strict time constraints, etc. This is much different from one guy trying to get a building constructed, who can make compromises at his discretion and hire his brother-in-law's company without raising ethics concerns. For stuff that doesn't involve a bidding process, the costs are fairly reasonable. For instance, one of the items at this meeting was that we wanted to construct a couple of information kiosks. We were really just looking for the park's permission to build and install them ourselves, with perhaps some guidance into how they want them to look. At the very least we expected to have to kick in some money. Instead we were told that the park maintenance department could build them during the slow winter season with lumber that they either had on hand or could acquire cheaply from their distributor.