Rov_Scam

1 follower   follows 0 users   joined 2022 September 05 12:51:13 UTC

User ID: 554

It's not apocryphal, it was just exaggerated by her biographer. Tubman was widely known in abolitionist circles in the 1850s and there is documentary evidence suggesting that she was involved in the Underground Railroad. That is beyond reasonable dispute. The scope and volume of her work are where popular accounts diverge from the accepted historical record. Tubman was interviewed for a Boston newspaper in 1863 and described nine rescue missions between 1850 and 1860 during which she helped about 70 people escape slavery. All of these trips were to the same part of Eastern Maryland where she was born, and all of those she rescued were family members or other people she knew. Bradford later claimed 19 trips, and a magazine article estimated that she must have rescued at least 300, and thus we end up with 300 people over 19 trips, even though Tubman herself never made such a claim. Bradford did speak to Tubman, but she admits that Tubman had no recollection of some of the trips she (Bradford) was claiming, and that she instead got the information from unidentified "friends". Tubman's activities during the war and afterward are well-documented.

You can choose not to believe Tubman, which is your prerogative, but keep in mind that the kind of first-hand account we get from her is par for the course in history. Having read her accounts, there's no reason to believe they are any more or less reliable than any other documentary evidence we have from the period. Certainly, corroboration of details would be desirable, but keep in mind that she was engaging in secret activity that had dire consequences if discovered. If we aren't willing to believe firsthand accounts without corroboration, then our evidence that the Underground Railroad existed at all rests on a rather shaky foundation. And this has implications for a lot of other things as well. We don't torch entire fields of history just because we suspect that people might lie.

A big part of the issue with Tubman is that professional historians didn't really start taking African American history seriously until the 1970s, which was coincidentally around the same time that popular "revisionist" history started making inroads. Tubman is an interesting figure because her contributions to American history aren't unique, but her status is, because she's identifiable. She's representative of a group of anonymous people who did similar things but didn't get the same profile. The upshot is that she didn't attract the same interest from historians looking to examine her life in detail. While social history, also of increased prominence since the 1970s, does look at people who aren't "great figures", it also consciously avoids trying to create them. For instance, a social history of the Underground Railroad would gather recollections from as many people as practicable and avoid placing emphasis on any one individual.

It wasn't until the early 1990s that the idea of examining the American mythos itself became the subject of serious discussion. Mystic Chords of Memory looked at how historical myth is created and how it changes over time. James Loewen isn't a historian and his work is controversial, but Lies My Teacher Told Me was a popular success and thus drew attention to the idea of heroification and raised general awareness that history isn't the pat story you got from high school textbooks. It still took another ten years before historians started looking at Tubman, and by then the process of making her into a heroic figure was complete, her life story filled with the kind of anecdotal detail that historians find suspect.

The consensus that emerged in the 2000s was basically that the broad arc of her story is true but that many of the details were either exaggerated or fabricated. She was a well-known and respected conductor on the Underground Railroad, but the number of people she helped escape was not in the hundreds but was more like 70. She did work as a nurse and spy during the Civil War. She had some kind of relationship with John Brown; she was prominent enough among the abolitionist community that she is mentioned in his writings. Bradford relied heavily on interviews with Tubman, but she also wrote to contemporary figures Tubman had mentioned for verification, and these letters survive.

From a politics perspective, any Republicans who cross the aisle to vote for Jeffries are 100% getting primaried, so that won't happen.

It's unlikely, but getting primaried isn't as much of a concern as some make it out to be. Primary threats only work for safe seats. If the district is competitive, sure, the Republicans can try a primary challenge, but an extreme partisan is dead in the water in the general.

Is this really a good thing for conservatives, though? For years I've heard them complain both about the length of bills and the power of the administrative state. The trouble is that if you insist on a shorter bill that does essentially the same thing as a longer one, what you're really doing is eliminating detail. If you're sticking to, say, Herman Cain's 9-page limit, what you're really doing is delegating to an agency with rulemaking authority.

Anyway, according to CNN as of 5 minutes ago, it looks like this new bill is dead. House conservatives balked at Trump's 2-year suspension of the debt ceiling, and there's nothing in it to entice Democrats. What we're seeing here is a repeat of the old divisions that made it impossible for the Republicans to elect a speaker last time around, and Massie has already said he's not voting for Johnson next year, so we might see a repeat of the McCarthy fiasco in the near future. Trump can take his victory lap, but it looks like the infighting that's dogged Republicans for a while isn't going anywhere. It's not inconceivable that the Democrats could tap one or two swing-district Republicans to vote for Jeffries in the name of ending the circus and getting down to business and deliver Trump an embarrassing defeat before he even takes office (it doesn't help that he raided the House for some of his appointments).

I think it's the portal more than NIL, at least for teams that don't have a ton of money available. You figure every 17-year-old kid who's being heavily recruited thinks he can just walk into Ohio State or LSU or Clemson and prove that he's worthy of being in the starting lineup and on the fast track to the NFL, and the recruiters do little to disabuse him of this notion ("Well, it's competitive but if you work hard..."). Instead, they find themselves buried on the depth chart behind two other guys, and when they think they'll get a chance at a promotion some other guy comes in to take their place. They only get playing time during blowouts, and the coaches aren't paying them much attention. Then they put their name in the portal and Iowa State or Georgia Tech or whoever comes calling and now they're suddenly in demand at a school that's not as big but big enough to get them national exposure if they're good. Or they're tearing it up at Kent State and have a chance of moving up in the world by transferring to Purdue. Schools that can't recruit as well can still compete by getting the big schools' castoffs and the small schools' surprises.

As NIL and, soon, direct payments become involved, I think this calculus changes. Pitt already saw Jordan Addison leave because USC simply offered him more money than any Pitt booster could match, especially in the early days of NIL payments. I was listening to a discussion on the radio yesterday about how colleges shouldn't even waste money on recruiting when you can just buy a team or get it through the transfer portal. The argument was that you can talk about the history and facilities and campus environment all you want, but recruits are always going to go to the school that can write the largest check. And if you can't afford to write that check, then sit back and wait for the inevitable transfer.

One thing I would add to my above comment is that while the article points out that the ruling isn't final, in reality I wouldn't expect the court to reverse it. One of the requirements for obtaining a preliminary injunction is demonstrating a high probability of success on the merits, and while the arguments probably weren't briefed as fully as they will be later, an appellate court granting an injunction is a strong indication of which way it's leaning. This is in line with the general trend that has seen courts striking down any restrictions relating to NIL payments. One article I read suggested that the NCAA needed to settle all of the lawsuits before it was made completely powerless, but this only delays the inevitable. In a sport where there's no collective bargaining and athletes cycle through every few years, any settlement is only going to apply to a limited number of people.

There are some proposals out there by academic types who claim that the problem would be solved if only courts would rule that student athletes were employees, or if the NLRB would institute rules allowing them to collectively bargain. This is a pipe dream. First, the NLRB can make all the rules it wants, but there's currently no incentive for the student athletes to collectively bargain. Even if we limit the unions to single sports, we're talking about thousands of athletes, none of whom are staying more than a few years, so organization is a problem straight away. Classification as employees doesn't solve this problem but creates more, in that now they have to be paid minimum wage (which wouldn't be that expensive under the current system) and abide by all the other HR bullshit that workplaces have to abide by. Getting back to the incentive problem, though, even if you could bargain, why would you? Collective bargaining units are usually formed when employees have grievances with their employers that can only be addressed by power in numbers. What grievances do student athletes have? There were student athletes in the Northwestern case who wanted to form a union, but that was before the NIL ruling. We're in a situation now where athletes can sell their services to the highest bidder on an annual basis, and courts are hesitant to uphold any restrictions. When commentators say that the mess can be solved by collective bargaining, what they really mean is that it would be easier for the schools to impose restrictions if there were a union to negotiate with. But who is going to form a union for the purpose of allowing the boss to implement more restrictions?

The only way I can see this ever being addressed is if the NCAA were to eliminate the student athlete protections from its bylaws. The courts seem intent on eliminating anything else that isn't related to scheduling or rules or officials, so what do they have to lose? The first thing I would drop is the limits on practice time for football and basketball. They're currently limited to 20 hours per week in season and 8 in the offseason, but if they're getting paid like employees they can work like employees. Schools that want to win will start implementing more intense practice schedules, and the athletes won't be able to do anything about it. If they flunk all their classes, well, that's a fringe benefit; if you're here to get an education, you can pay tuition. Stop coddling them with tutors and lounges and multi-million dollar locker rooms (seriously, the difference between Pitt's locker room and the Steelers locker room in the same building is astounding).

Teams that were serious about winning would accordingly practice more, and with the money involved, some would want their teams at the practice facility the 10–12 hours per day that NFL teams expect. There will obviously be some kids who are dead serious about their careers and will want to spend as much time on the game as possible. But few 18-year-olds with no shot at the NFL want that kind of commitment, especially if they're still ostensibly there to get an education. Taking online classes in a cubicle adjacent to the locker room in between workouts probably wasn't what they had in mind. Not having a social life during the season because you have to get up at 6 am for practice every day except Sunday and Friday probably wasn't what they had in mind. The schedule of the average NFL player doesn't have much appeal to someone who isn't playing football for a living. But any program that adopts such practices will probably have an advantage, and in a market where more wins equals more money, few schools will be content to be left behind.

One possible counterargument to this theory is that some schools will adopt less demanding schedules and use that as a selling point to recruits. But I don't see it happening that way. Such a school would attract lazy players, and that, combined with the built-in lesser amount of practice, would make it hard for these schools to be competitive with the tougher ones. With so many schools looking for a piece of the pie, and so many roster spots to fill on college teams, it becomes a race to the bottom to see who can work the kids to the point of diminishing returns. At this point, the only way out for the athletes is to make concessions about payments and transfer rules, and you need a bargaining unit for that. I doubt this would actually happen, but it's the only way I see collective bargaining entering college football.

I'm glad this is in the news again because it dovetails nicely with something semi-related: James Franklin's complaining about how the transfer portal closes in the middle of the playoffs, meaning his good backups are all out. There was some discussion of this on local sports talk radio where everyone seemed to be in agreement that it was ridiculous that the portal closed on December 28 and shouldn't even open until after the championship game. There was a brief mention that it might be some kind of transfer credit thing that keeps them from moving the dates back, but this was quickly dismissed since everyone seems to understand that the idea of these kids being students is a myth anyway.

But I don't think they really gave the issue proper treatment. The National Championship game is on January 20. The portal, as it is now, is open for almost 3 weeks, so if it opened on January 21 it wouldn't close until sometime around February 7. In the middle of the spring semester. If schools want to maintain the ever-fading illusion that these are student-athletes at all, they can't start accepting mid-semester transfers purely for athletic reasons. If they do, they open themselves up to further lawsuits challenging the entire idea of academic eligibility, or even that a player has to actually be enrolled in the school. After all, if you're regularly allowing athletes to drop out of classes a few weeks in before transferring just in time to be hopelessly behind any classes they can manage to get into (people out of college a while tend to forget how quickly classes fill up), it's going to be hard for the NCAA to make the argument that they even pretend to care about academics.

One possible solution is to delay the effective date of the transfer until the summer semester. This creates an additional problem, though, in that the player wouldn't be able to participate in spring practices, and any coach looking at transfers would like to know what he has as quickly as possible and get the new guy integrated with the team. It's hard to imagine that this policy would lead to any less bitching on the part of people like Franklin than the current system.

Here's where I think this all ties into Klosterman's point: None of this has affected fandom because most of the concerns are academic for the time being. We can bitch about players being paid or entering the transfer portal, but it hasn't really affected the on-field product that much. Middle and lower tier schools aren't able to pay big NIL money like the big schools, but they weren't able to recruit like the big schools, either, and the effect of the portal so far seems to be a wash. If nothing else, I don't see Colorado or Indiana or even Pitt (despite the massive choke job) having the seasons they've had without the recent rule changes. Coach Prime might be a doofus who gets criticized for his way of doing business, but that program was circling the drain before he came, and he single-handedly revived it.

The problems will start to creep in when the economics of the game start to have an adverse impact on the top schools. Take Penn State, for instance. In a normal year, if Drew Allar got injured and they lost a playoff game because of it, it would suck for them but be an accepted part of the game. If Allar gets injured this year and they're forced to start a freshman who has never played in an NCAA game, it will be a disaster. The consensus among Penn State fans will be that the normal backup would have at least given them a shot, while losing him on short notice completely wrecked their season. As a Pitt fan, I would absolutely love to see this happen if only for the number of central Pennsylvanian heads that would explode.

The starters aren't immune to this either. In recent years, there's been a trend of NFL-bound players sitting out bowl games to avoid injury. After Matt Corral got injured in his bowl game, it became common wisdom among commentators that sitting out is the smart move and it's just not worth it to play. How long before this logic starts creeping into the playoff? If you're going to the NFL the next season, the downside of playing greatly outweighs the upside, especially if you've already indicated that you're only chasing money. One or two games might not be a tough sell, but three or four? What happens when starters start hitting the portal before the playoffs when offered more money? Fans begged for a playoff for years and now they've got one. If the whole thing ends up determining not who has the better team but who has the most seniors not heading to the NFL, or the fewest guys entering the portal, or any number of ancillary factors, then it will turn into a farce that even hardcore fans will find hard to accept.

Can I ask how you put a 7 year old to bed when the sun is still up?

You tell him to go to bed, just as my dad told me to go to bed when I was seven and had to go to bed at 8:30 in the spring. I don't know when or why putting school age kids to bed became an hour-long ordeal for the parents.

Because it's just a clunky way of achieving the same end.

I just finished The NHL: 100 Years of On-Ice Action and Boardroom Battles, by D'Arcy Jenish. Last year I found a copy of David Harris's The League: The Rise and Decline of the NFL, and was captivated by it. It's a history of the power struggle among NFL ownership that culminated with Al Davis moving the Raiders to Los Angeles and Pete Rozelle's authority as league commissioner severely challenged. But it's also a history of ownership and the business side of the league from roughly 1974 to 1982, with the first section covering the "status quo ante" as it had developed since 1960 and a final postscript covering the three years between the immediate aftermath of the move and the time the book went to press. It's a remarkable story, covering the entire history in great detail over its 640 or so pages.

I was looking for something in the same vein and The NHL seemed like it had promise. As a much shorter book (fewer than 400 pages) covering a much longer time period (1917 to 2011), I wasn't expecting the same level of detail. And boy, I did not get the same level of detail. I wasn't really expecting it for the early years of the league, as the author admits that the source material is thin, all the major figures are dead, and the NHL wouldn't give him access to what they had. So when the book seemed to be breezing through the Calder era and including a lot of padding, I sort of nodded along, figuring that by the time we got to, say, the 1960s and the expansion era things would start to pick up a bit. They did, but things were still moving at a pretty good clip, and without records or living witnesses, the task probably wasn't made much easier.

It's once we get to the John Ziegler era that the disappointment started to set in, since he interviewed Ziegler for the book. It seems as though once Ziegler put out all the fires Clarence Campbell left in his wake, very little happened for another decade. Once we get to the Bettman era, though, it takes even more of a nosedive; these are the years I remember paying attention to the league, and while he does a decent job of pointing out all the high points (expansion, lockouts, franchise relocation, etc.), there's not much here that someone buying a book on the subject doesn't already know.

Take the 1994 lockout, for instance. It was the first major work stoppage in league history, it lasted 104 days, and 468 games were lost. This merits fewer than four pages. It is immediately followed by discussion of the Nordiques' relocation to Colorado, which doesn't even get one full page. Major stories of the 1990s, such as John Spano buying the Islanders despite having no money and the Penguins' 1998 bankruptcy (which resulted in Lemieux taking ownership of the team) are not discussed at all. I understand that you can't include everything due to space considerations, but when he spends three pages talking about the on-ice exploits of the 1980s Oilers, and elsewhere discusses the dynamics of various playoff series, it seems disconcerting in a book ostensibly about the business side of the game.

And it gets even worse from there. Once we get past the 04–05 lockout, the final chapter is dedicated to what are evidently magazine articles copied and pasted into the book. There's a section where he discusses the state of the league circa 2012 that centers around an interview with Gary Bettman. This is followed by a detailed description of the War Room in Toronto and a discussion of what's available on the NHL website.

Even in the early parts of the book, he leaves threads hanging. For instance, he talks about how competitive balance problems in the early 1950s led the league to institute a draft, but since the good teams wanted to protect their farm systems it was compromised so the losers didn't have access to the really good prospects. As Chekhov said, though, if you introduce a gun in the first act, you'd better fire it in the third; the draft is never mentioned again. Obviously, at some point the draft evolved into what it is today, where every team has its pick of junior players, but I have no idea how this actually came to be since Jenish forgets about it. This is especially maddening when he's talking about the '70s expansion teams trading draft picks or building through draft picks and I'm left wondering what the system even is at that point. There's stuff like this throughout the book.

He also makes one critical omission: when we get to Clarence Campbell's retirement, he chalks it up to his advanced age and inability to keep up with the crises the league was facing. What he doesn't mention is that Campbell announced he was stepping down shortly after he discovered he was under investigation for bribing a senator.

All in all, it's not a bad book by any means, especially if you're just looking for a breezy capsule history of the business end of the NHL, but I'm not sure who it is for. Anyone reading this book already knows 75% of everything that's covered after 1992. Anyone who doesn't probably isn't interested in a book about the business end of pro hockey. Once I read a book on a subject I'm usually ready to move on to something else unrelated, but I just started The Instigator: How Gary Bettman Remade the NHL and Changed the Game Forever because The NHL left me so unsatisfied. It seems promising, but at only 276 pages, I'm not expecting miracles.

This comes up every year around clock change time and perma-DST people and noon is noon people are equally moronic. The mere existence of this debate is proof that time changes are needed. Seriously, if you can't handle two time changes a year maximally coordinated to minimize inconvenience, then you should never be allowed to get on an airplane again in your life. Or stay up past your bedtime. Or sleep in. Or do anything else that results in any mild disruption to your precious sleep schedule.

Losing an hour of sleep on a weekend is something I can deal with once a year. But as a white-collar worker who gets up at normal o'clock, waking up in the dark is something I do not want to deal with on a regular basis, as it is noticeably harder to get going in the morning when it's still dark. I currently have to deal with this maybe a few weeks out of the year. Permanent DST would have me deal with it from the end of October until mid-March, and I really don't want to fucking deal with that. Conversely, if we eliminated DST altogether it would mean I'd forfeit the glorious hour between 8 and 9 in the summertime when it's warm and still light enough to do things outside in exchange for... it getting light at 4 am. To those early birds who think that it getting light at 4 is just as good as it staying light until 9: you either do not have a job, a family, or other real-world obligations. The average person isn't getting up at 3:30 am to sneak a round of golf in before heading to the office. For those of us who don't get out of work until 5 pm or later, that extra hour in the evening is a godsend.

So can we stop this perpetual bitching? Time changes were implemented for a reason, and people who think we'd be better off without them have never actually lived in a world without them. The benefits are all theoretical. When permanent DST was implemented during the 1970s, the program was cancelled within a year because people couldn't abide the first winter. And very few people want to end summer evenings early. This has to be the stupidest debate in American political discourse; just leave things where they are.

To understand Kinkade you have to understand how the art world actually works in terms of tastemaking. In today's visual age, where images are easily reproduced in books and magazines, on television, on the internet, and everywhere else it's possible to reproduce images, we tend to forget that the kind of familiarity we have with art is a new phenomenon. For most of human history, the only way you knew what a painting looked like was if you actually saw it in person. And even that is an easier proposition than it once was, since public museums that hold the great works are a relatively recent phenomenon. In our world, it's easy to ignore art precisely because we're bombarded with it, whether we like it or not. Yet it is he who pays the piper who calls the tune. Every man is entitled to his opinion, but unless you're actually a bona fide art consumer your opinion doesn't count for anything.

To be a bona fide art consumer, you have to be the kind of person who is willing to peruse galleries in your area with the intention of dropping hundreds or even thousands of dollars on a painting, not because it will make a good investment, but purely because you like it. The gallery is an essential part of the system. I have a friend who has the rare distinction of being an art history major who actually works in her field. She worked on the staff of the Andy Warhol Museum and owned a gallery in Pittsburgh for a few years before moving to Texas (and managing a gallery there). With art schools graduating thousands every year, and many more than that selling paintings, it's hard for someone looking to buy art who's not fully ensconced within the art world to know where to start. The gallery owner thus acts as an intermediary, able to identify pieces of sufficient value that she can recommend them to customers without hesitation, yet also attuned enough to the economics and tastes of her customer base that she won't alienate them.

It's worth pointing out that there's no barrier to entering the world of an art consumer other than money and the willingness to use it. The whole concept of a gallery opening is to generate buzz that gets people in off the street. They're essentially parties with free booze and light appetizers, and the people throwing them don't care whether you're actually interested in buying anything or have any pull in the art world (though you should dress appropriately and be willing to mingle with the crowd). I tried to attend as many of my friend's openings as I could, and she was always appreciative, as a full house with no buyers is always better than a sparse turnout. Anyway, this is the way the system is. If you're an artist, you try to get noticed by a gallery owner who agrees to display your work and hopefully sell it. If you make enough sales, you'll get a one-man show, have your work displayed in better galleries, get overseas exposure, and eventually reach the rarefied air of having your work sell at Sotheby's for tens of thousands of dollars.

There are some artists, though, who can't cut it in this system. Most artists, in fact. Most of them just keep their day jobs and do art on the side and make an occasional sale; nothing wrong with that. But some of them want in so desperately that they open their own galleries. These are called "vanity galleries" and are frowned upon. An artist selling his own work through his own gallery is making a tacit admission that he couldn't make it in that world and is trying to buy his way in. From an economic perspective, Thomas Kinkade's work didn't appeal to bona fide art consumers who bought paintings through galleries. It did, however, appeal to the kind of unsophisticated consumer who was willing to pay 40 bucks for a print and didn't even care if the nameplate artist actually did the underlying painting. Kinkade took the vanity gallery to its logical conclusion by opening a chain of stores where you could buy reproductions of his work in between buying jeans and grabbing an Orange Julius.

Buying real art is an intimate act. You attend a gallery opening where you peruse what's available and probably talk to the artist. If you're interested in buying something you call to make an appointment to conduct business during the week. You get an original work that nobody else will have, that the artist put hours into. And you pay a price that demonstrates your appreciation for those efforts. Kinkade reduced it to a commodity that was as disposable as any other. Of course, some respected artists thought that art should be a commodity, most notably Andy Warhol. This would at first seem to absolve Kinkade, but two things need to be taken into consideration. The first is that Warhol only gets respect for this revelation because it was novel at the time. Other pop artists existed before him, but he was the first to take the ball and run with it, while still straddling the line of whether he was serious or not. Some thought his work was criticism of consumer culture; he insisted that he was dead serious that it was not, but his aloof public persona suggested a hint of irony.

Which leads into the second point about Warhol. By the 1980s it was clear that he indeed was serious, and his stature started to fade. The endless screen prints and commissioned portraits of celebrities may have caused his image to soar among the public, but he fell off with critics. Furthermore, a new generation of artists raised on Warhol took his beliefs seriously and began equating garishness with quality. He died unexpectedly after gall bladder surgery in 1987 which was bad for him but good for his image, as he couldn't spend the next twenty years sullying it even further. While the pop art of the 1980s was mass-produced and kitschy, it was at least popular kitsch. Art may be fashion, but fashion is at least contemporary. Kinkade was just as kitschy, but he didn't even try to be cool. He produced art for the kind of people who collect Precious Moments figurines. And as he got older and more famous his strategy became even more crass. If one goes to his website today, the entire first page is licensed work. If his work wasn't kitschy enough already, you can always add a few Disney characters. What makes this especially egregious is that some of the characters, like Moana, didn't exist until after Kinkade's death, further emphasizing the fact that none of his alleged work has anything to do with him personally.

Years ago, before his popular revival, I told my gallery-owning friend that I wanted to write a critical defense of Bob Ross. When I was in high school, art teachers hated Bob Ross, so I thought I was being edgy. She told me that Ross wasn't controversial and that if I really wanted to ruffle some feathers I should defend Thomas Kinkade. I knew little of his work, but, having since looked... I just can't. It's not even good in a technical sense since he obviously doesn't understand color theory. Everything looks garish. There is no sense of proportion. Robert Hughes of Time magazine was highly critical of contemporary art in the wake of Warhol, and he complained that everything seemed designed to make the biggest immediate impact but had no staying power. Kinkade is no exception; his paintings hit you like a dish where you just threw in a dash of every spice in your cupboard. And this is all in pursuit of nothing more than cloying sentimentality. His works don't have anything to say about life, liberty, and the pursuit of happiness. At least Norman Rockwell led one to consider the meaning of the American Dream, and Warhol sparked discussion of consumer culture and celebrity. But what does Kinkade do? Are his paintings meditations on false nostalgia? Maybe, but I doubt he would have agreed. Gallery owners recognized the vapidity of his work, so he had no credibility. He had commercial success but it was due more to marketing than craftsmanship. One can argue that millions of people find his work visually appealing, but millions more find pornography visually appealing. I'm not trying to argue that Kinkade isn't art, but I'm not trying to argue that pornography isn't, either.

The anti-arbitration memes have given the practice an unfair rap, which has in a perverse way contributed to a self-fulfilling prophecy that was ultimately bad for consumers. Arbitration clauses were added to consumer contracts primarily as a means of preventing class actions, not as a cynical way to rig the outcomes. We can argue over whether limiting class actions is all that noble a goal, but I can assure you that they aren't initiated by aggrieved consumers but by lawyers who figured out that if consumers were being bilked out of 50 cents' worth of Cheerios for every box sold, 1/3 of the total payout will be beaucoup bucks. So they file a class action representing anyone who bought Cheerios during the year that the scales were defective, and millions of consumers get dollar-off coupons while the lawyers take home a third of the total settlement value.

Anyway, studies came out showing that consumers lost a disproportionate percentage of arbitration cases as compared with regular court cases, and people concluded that this must be because the companies choose arbitrators they know will rule in their favor and who have a financial interest in not biting the hand that feeds, and since the proceedings are secret they don't even have to face public scrutiny. This was a convenient explanation, but someone looked harder at the numbers and found that the study showing arbitration was a raw deal was flawed. It included all cases heard under consumer arbitration clauses, not just consumer-initiated ones. And the bulk of these cases were debt collection claims filed by credit card companies against people who didn't pay their bills. In other words, the numbers were skewed by claims that were vastly different from what one thinks of as "consumer claims", and that would have had the same result in a regular court.

Actually, they would have had a worse result in regular court. In almost all of these cases the debtor has no real defense, so they don't bother to fight the charges. In regular court this results in a default judgment. In arbitration, however, the arbitrators actually made the credit card companies prove their case. And they found that arbitrators rarely awarded the creditors the full amount. So even in cases that would normally seem hopeless, arbitration was better for the consumer. And it was better for the consumer in other cases as well. I forget the exact numbers, but assuming that the odds of a satisfactory outcome are 50% in normal court, they were something like 58% in arbitration. Not a slam dunk, but not exactly strong evidence that the deck is always stacked against the little guy. Nonetheless, companies started including arbitration clauses to guard against class actions. Eventually they became boilerplate, even in contracts that had little exposure to class actions. People like Ralph Nader took notice and published studies saying that this was bad for the consumer. Consumers responded by assuming that arbitration claims were unwinnable, and stopped filing them. Companies started including more of these clauses because they became a surefire way of preventing claims. That all of this was bullshit got lost in the noise.

It is thus that I present my own personal experience with arbitration, to show you how the process goes. In the winter of 2022 I was driving from Pittsburgh to Colorado to ski, and my right rear wheel started making noise around Kansas City. Suspecting that this might be a bad bearing, and having a long drive to get home, I decided to have it looked at in Denver. I used a shop my cousin's husband recommended (though I found out later that he only named it because it was close to his house). I explained the situation and that I needed it done that day, and they quoted me $2,400. Not having much of a choice, I agreed to have the work done. When I was driving back I called Subaru in Pittsburgh to get a quote for the work: $1,200. From the dealer, with genuine Subaru parts. Needless to say, I felt ripped off.

But what to do? I had agreed to the price. But upon looking at my bill, I was only given a total without an itemized breakdown. Subaru had given me more information over the phone, without my even having to ask for it. So I began looking for something to use as leverage. According to the Colorado Auto Repair Code, the shop had committed several violations, for each of which I was entitled to statutory damages of $500. The most obvious one was that they didn't provide a breakdown of parts and labor costs. When I got home I called them, knowing it was futile. I told them that Subaru in Pittsburgh quoted me the job at half the price. They said things were more expensive in Denver. I asked them if they'd match a quote from Subaru in Denver. They said no. I asked them for the itemized breakdown. The labor costs were actually reasonable, but, for aftermarket parts, they charged me more than double the list price of the OEM equivalent and more than triple the list price of the parts they actually used. This markup over OEM plus the diagnostic fee (which is usually waived if you have the work done) was responsible for the difference between their price and the Subaru price. I explained the code violations. He said that every garage he ever worked at did it that way. I told him that the law is pretty clear and that they're in violation. He told me that if I was threatening legal action he had to end the call.

So I looked at my bill and immediately found the arbitration provision. You have to inform them in writing and wait 30 days before filing a claim. I sent a certified letter explaining the situation. I received no response. 45 days later I opened a claim with JAMS, an arbitration association. The advantage of JAMS over the American Arbitration Association is that JAMS requires an in-person hearing in the consumer's county. Within a week, I got a call from the owner, who was very apologetic. I think the reality hit him that it was going to cost him somewhere in the neighborhood of 5 grand to defend this arbitration claim (in consumer arbitration, the consumer is only responsible for the initial fee). He offered me $500 plus a waiver of the diagnostic fee. I told him I wanted the difference between what Subaru quoted me and his price, plus $250 to cover my filing fee. He bristled at having to cover the filing fee, and I told him I had tried to resolve the issue with the service manager weeks ago and it could have ended there. In the meantime, it could end here, or we could take it to the arbitrator. Another week's delay and he'd have to pay $1,500 for his side of the initiation fee, which is about what I was asking. He agreed to charge back the amount I was asking for.

If it hadn't been for that arbitration clause I'd have had to go to Colorado and file suit in small claims court there. He wouldn't have had to pay any fees, and it would have been really convenient for him to defend the suit. I don't know if I would have won. I don't know if I would have won the arbitration hearing either. I do know that a Pennsylvania arbitrator deciding a case involving a local tourist who feels he was swindled by an out-of-state mechanic who knew he was in a desperate situation is not going to feel too much sympathy for the mechanic. He's also not going to be familiar enough with Colorado law to offer a sophisticated analysis of the legal issues. I might not have gotten what I was asking for, but I would have gotten something. In any event, since the expense is borne by the merchant, there's a strong incentive on their end to resolve the matter quickly. It may not be great for malpractice cases involving hundreds of thousands of dollars, but for little shit like this it works much better than the court system.

Just as a preliminary matter, I looked up some statistics at work today on the issue, and they were surprising. The average malpractice settlement is around 350k, and the average verdict is around 1 million. I thought these numbers were low, but I saw them quoted in multiple sources, so I'm going to assume they're true. I practice product liability and toxic tort law, and while I don't know the total settlement average due to the number of defendants, we usually estimate verdicts in the 2 to 3 million range for someone with cancer, even if it's an older person in bad health. There are very few verdicts we can use for comparison, but they're almost all significantly larger than this. That being said, I saw another statistic suggesting that 80%–90% of cases with weak evidence resulted in defense verdicts, 70% of borderline cases did, and only 50% of good cases did. This suggests that juries really don't like awarding damages, but when they do, they go big. In my line of work a defense verdict is highly unlikely, so 1 million may be a reasonable amount if you consider that the modal jury award is zero.

I also learned that 29 states already have tort reform that limits non-economic damages, including some big ones like Texas and California. These caps range anywhere from 250k to 1 million, but they're still significantly smaller than what you'd expect from a jury. Without non-economic damages, it's pretty hard to get to these huge amounts, since they are by nature designed to put a dollar amount on what's priceless. For economic damages to get truly large you'd have to have something like a high-earning plaintiff who is totally disabled and needs to be in a skilled nursing facility for the rest of their life, and even then I can't see it getting above 20 million or so. To show you how we'd calculate that, say you have a 25-year-old who makes 100k a year and is permanently disabled. That gives you 4 million in lost earnings assuming retirement at 65. However, if he's entitled to disability payments totaling $1500/month, you'd deduct that, leaving you with about 3.2 million. If the skilled nursing facility costs 10k/month and he's expected to live an additional 50 years, that's 6 million, except Medicaid is covering part of that cost so you have to deduct that. Add on the medical bills and other stuff and you might get to ten million, which is steep, but nothing like 70 million for pain and suffering alone. And this isn't something the plaintiff is just going to assert out of thin air; they have to show medical bills and hire an economic expert to estimate future earnings and costs. To address your points:
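First, though, the back-of-the-envelope arithmetic above can be sketched in code. The figures and helper names are just the paragraph's hypotheticals, and a real calculation would also discount future sums to present value, which this sketch omits:

```python
# Sketch of the economic-damages arithmetic described above.
# All figures are the hypothetical ones from the example, not real data.

def lost_earnings(salary, years_to_retirement, monthly_offset):
    """Future lost wages minus collateral-source offsets (e.g. disability)."""
    gross = salary * years_to_retirement
    offset = monthly_offset * 12 * years_to_retirement
    return gross - offset

def future_care(monthly_cost, years_of_life, monthly_covered=0):
    """Cost of future care, net of amounts covered elsewhere (e.g. Medicaid)."""
    return (monthly_cost - monthly_covered) * 12 * years_of_life

earnings = lost_earnings(100_000, 40, 1_500)   # 4.0M gross minus 0.72M offset
care = future_care(10_000, 50)                 # 6.0M before any Medicaid offset

print(f"Lost earnings: ${earnings:,.0f}")   # $3,280,000
print(f"Future care:   ${care:,.0f}")       # $6,000,000
```

Even stacking both figures plus medical bills, you land around the ten-million mark described above, nowhere near a 70-million pain-and-suffering award.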

  • I read an NIH study discussing defensive healthcare, and the results were inconclusive. While surveys showed that something like 75% of doctors agreed that they practiced it, there was no attempt to quantify it, and the NIH study conceded that relying on self-reported questionnaires has its limitations. A study in I think Florida that compared cardiologists who had previously been sued against those who hadn't showed that the ones who had been sued ordered 1% more tests, but again, that kind of comparison has its own limitations. From what I can tell, malpractice insurance premiums are as much as 50% lower in states with award caps than in those without, but whether this has any effect on the amount of defensive medicine practiced is anyone's guess. I certainly haven't seen any suggestion that doctors in California are less worried about malpractice claims just because the risk of real whoppers is limited.

Like I said earlier, trials are rare. Something has to go seriously off the rails for a case to go to trial. While caps eliminate some of the tail risk of claims, they don't seem to reduce the total number of claims. It's worth remembering that most claims are going to settle well within any reasonable award cap. Even in states without caps, while plaintiff's attorneys may dream of huge awards, those are really a mixed blessing. A settlement offers cash almost immediately; a jury verdict means waiting months for a shot at a large judgment that may get appealed, keeping the money out of your hands for years. If you get sued as a doctor, it's more likely to be the kind of case that settles for 200k than the kind where a jury awards a multi-million dollar verdict. The only thing I can conclude is that even if you were to make large awards impossible, as long as you're allowing any kind of malpractice suit, the insurance companies are going to want to limit the risk, and if that means defensive medicine, that's what you're going to get.

  • What you're describing here already exists, in a way. They're called arbitration panels. Arbitration is a form of alternative dispute resolution in which an arbitrator or panel of arbitrators is selected by the parties to hear the case and make a determination. The arbitrators are attorneys who have experience in the relevant area of the law. The way it would work in a malpractice action is that if a neutral arbitrator is required, an independent agency like the American Arbitration Association would provide the parties with a randomly-selected list of 15 medical malpractice arbitrators. Each side would get to reject 5, and the arbitrator would be selected randomly from among the remaining names. If an arbitration panel is needed, each side would appoint its own arbitrator, and the two arbitrators would agree on a third neutral party. The procedures are much more informal than in court. Discovery is limited, the rules of evidence don't apply, and the arbitrators may limit what testimony they'll allow and even whether you can cross-examine witnesses. For instance, instead of taking depositions you'll get the relevant fact witnesses to submit written statements, and the expert witnesses will submit their reports, and that will be the end of it. There is no right to appeal.
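The strike-and-select procedure above is simple enough to sketch: the provider circulates 15 names, each side strikes 5, and the arbitrator is drawn at random from whoever remains. The names and function are hypothetical, for illustration only:

```python
# Sketch of the strike-and-select procedure: 15 candidates, 5 strikes per
# side, random draw from the remainder. Names here are placeholders.
import random

def select_arbitrator(panel, plaintiff_strikes, defense_strikes, seed=None):
    """Remove each side's strikes, then pick randomly from who's left."""
    struck = set(plaintiff_strikes) | set(defense_strikes)
    remaining = [name for name in panel if name not in struck]
    rng = random.Random(seed)
    return rng.choice(remaining)

panel = [f"Arbitrator {i}" for i in range(1, 16)]   # 15 candidates
plaintiff = panel[:5]                               # each side strikes 5
defense = panel[5:10]
print(select_arbitrator(panel, plaintiff, defense)) # one of the remaining 5
```

Note that if the two sides' strike lists overlap, more than 5 names survive, which is why in practice each side strikes from the same circulated list.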

Requiring arbitration isn't something you have to wait around for the state legislature to do; doctors and hospitals can and have put mandatory arbitration provisions in their patient care agreements. If the patient doesn't like it, they can choose another doctor or go to another hospital. But these provisions are actually becoming less common than they were a couple decades ago. Why? Because the average settlements are higher.

For whatever reason, arbitrators (and judges) love splitting the baby. With juries it's all or nothing. With arbitrators, it's like they calculate the damages and make the award based on how strong they think the case is. They aren't going to give out a bonanza in any circumstances, so the ceiling is lower compared to juries. But the floor is higher; a weak case that would result in a defense verdict at trial is going to result in at least some award in arbitration, even if the award is small. And since the process is significantly less expensive than litigation, the whole calculus changes. If I go to a traditional trial I'm going to spend a ton of my own money in exchange for, at best, a 50/50 chance of getting a favorable verdict. In arbitration, the marginal cost of going all the way is lower, and the chance of walking away with something is higher. There's less of an incentive to settle, so if the defendant wants to make the case go away he's going to have to offer something close to what he expects the award to be. Realistically, though, in arbitration the plaintiff has no real motivation to settle, so what you end up with is an arbitration award that ends up being more than you would have paid in a traditional settlement, and since the process is so frictionless for the plaintiffs, they're going to file more suits.
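That changed calculus can be illustrated with a toy expected-value comparison. Every probability and dollar figure here is invented for illustration; the only inputs taken from the discussion above are the rough 50/50 odds at trial, the graded "split the baby" award in arbitration, and the lower cost of the arbitration process:

```python
# Toy expected-value comparison: all-or-nothing trial vs. graded arbitration.
# All numbers are illustrative assumptions, not data.

def expected_net(p_award, award, cost):
    """Expected recovery for the plaintiff, net of litigation costs."""
    return p_award * award - cost

# Trial: expensive to reach, verdict is all-or-nothing, roughly a coin flip.
trial = expected_net(p_award=0.50, award=300_000, cost=80_000)

# Arbitration: some award is nearly certain, but scaled down by perceived
# case strength; the process costs far less to see through to the end.
arbitration = expected_net(p_award=0.95, award=0.60 * 300_000, cost=20_000)

print(f"Trial:       {trial:>9,.0f}")        # 70,000
print(f"Arbitration: {arbitration:>9,.0f}")  # 151,000
```

With numbers shaped like these, the plaintiff's expected recovery from going the distance in arbitration exceeds the expected trial outcome, which is exactly why defendants end up offering settlements close to the anticipated award.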

Now, you could say that you meant that this panel should include doctors and not lawyers, or maybe a combination of the two, or maybe that you didn't mean arbitration but a more formal system like trial but with an expert panel instead of a jury, or whatever. Just keep two things in mind. The first is that the system is designed to compensate people for injuries, not to make things easier for doctors. The effects of malpractice suits on medicine are unfortunate, but as long as we believe that people who are injured by malpractice are entitled to compensation, they will persist. You may think certain cases are bullshit, but the plaintiff is still suffering, and I'm saying this as part of the defense bar. The other thing to keep in mind is that there's no reason to believe that some alternative fact finder is going to do better than a jury. You can change things, but you may not like the result.

Tort reform is less low-hanging fruit than a buzzword that refers to a set of vague policy ideas that have only a tenuous relationship to actually reducing the number of lawsuits. The exception is when it's done really aggressively, in which case it pretty much bars all lawsuits except the few that meet stringent criteria. Most of this is based upon a myth that what's driving these costs isn't just lawsuits but frivolous lawsuits. And yeah, any News of the Weird type publication can show you all kinds of examples of clearly frivolous suits, but these are a distinct minority, especially in medical malpractice litigation.

The thing about medical malpractice and most other personal injury suits is that they're already expensive to litigate. Plaintiff's lawyers aren't going to take a case unless the damages are enough to make it worthwhile. Just to start with, you're going to need medical records, which are going to cost hundreds of dollars and can easily run into the thousands if there was a lengthy hospital stay. Then you need attorney time to go through these records. You need to depose witnesses and order transcripts; you're looking at deposing at least the doctor you're suing, the plaintiff (or other damage witnesses if the plaintiff is deceased), and possibly other medical personnel. Once you've gathered this information, it's useless unless you have an expert who can explain to the jury why the doctor's conduct deviated from the standard of care, so add another 10 grand or so for an expert report.

And this is all just to get to the point where you can talk settlement with the defense. If the case actually goes to trial, tack on another $60,000–$100,000 in time and expenses to see the case through to verdict. The upshot is that very few plaintiff's attorneys are willing to take on "frivolous" cases. Ideas like imposing the English rule, where an unsuccessful plaintiff has to cover the defense costs, are ultimately irrelevant in a legal environment where 99.9% of cases are settled before trial. It may make some defendants more likely to take a chance on borderline cases, but there aren't many of those.

Not very, since bar associations are professional groups without any power. Attorney qualifications are usually set by the state supreme courts, and the Federal government only requires that their lawyers are barred in one state or DC, so unless everyone is on board it will only lead to inconvenience.

With gun ownership, I think the discrepancy can be traced more to the proliferation of hobbies that began in the late 1960s. For my grandfather's generation, if you were an outdoorsy person and wanted a hobby, you were pretty much limited to hunting and fishing, as well as day hikes. These days we also have backpacking, mountain biking, whitewater paddling, rock climbing, xc skiing, and other stuff to choose from, all of which require a significant investment in time and money. I'd hunt if I had unlimited time, but since I have to work for a living, every day I spend hunting is a day not spent hiking or on the bike, and when you add normal social obligations and chores into the mix, that's not a lot of days to begin with.

In the meantime, gun ownership has turned into a hobby of its own. My grandfather owned a lot of guns, but they were all for hunting. I don't think he even owned any pistols. Hell, prior to the 1990s it was difficult to impossible to carry concealed in most places. Now I have friends who own a lot of guns, and whose participation in the hobby seems to end there—they don't hunt and I never hear them talk about going to the range or anything like that. So diversification takes away a large part of the traditional base of gun ownership but adds a new base for whom acquisition is more important than having a specific use. If I'm a hunter in 1965 I probably only need a 12 gauge and a deer rifle and maybe a .22. Now it's de rigueur to own an AR even if the real-world applications are limited.

I find it hard to believe that the Steelers will win this game. In the past I talked about defensive strategy and how to contain Barkley, but I made the mistake of analyzing this game the way a normal person would analyze it. The Steelers had their trap game against Cleveland a few weeks ago, and it had all the classic trap storylines: short week, dreaded Thursday game, divisional opponent, road game, playing a team so bad that beating their principal rival at home is akin to winning the Super Bowl, coming off a hard-fought victory against your principal rival and main competitor for the division title (I'm not including the weather because it wasn't bad until the second half, and the Steelers outplayed the Browns then anyway). This isn't so much a trap game as it is a game where the benefits of winning don't necessarily outweigh the costs of losing.

First, we have to talk curse. The Steelers haven't won in Philadelphia since October 24, 1965, when they beat the Eagles 20–14 at Franklin Field. It was one of two Steelers victories that season. TV announcers like to point out that the Steelers have had only three coaches since 1969; what they don't tell you is that they were firing coaches all the time before that. The coach in 1965 was some guy named Mike Nixon, who was head coach for only a single 2–12 season. The quarterback for the Steelers was Bill Nelsen, who passed for 79 yards, a touchdown, and an interception. The whole team had a whopping 139 yards of total offense, and won thanks to two defensive touchdowns. The Steelers were somehow 8-point favorites in this game despite being 0–5 going into it, though Philly wasn't much better at 1–4. The Steelers would begin their losing streak in Philly over the following two seasons, but that was when they were perennially terrible and expected to lose. In 1969 they hired Chuck Noll, drafted Joe Greene, and commenced the only true rebuild in franchise history.

It was that 1969 season when the curse began. The Steelers were getting pasted, and rookie Joe Greene was pissed that he was being held all game and not getting flags. In his frustration, he grabbed the ball from the center before it was snapped and threw it into the stands. In the ensuing decade the team would win four championships and go from Same Old Steelers to Super Steelers. But the cost of this was that they wouldn't win in Philly again. To be clear, this isn't some universally accepted curse, since I just made it up two days ago, but I like the idea that everything is controlled by curses and football gods so I have quite the repertoire of phony curses that I've invented to explain nearly every misfortune in the sports world. The idea of a curse is bolstered, though, by the teams that actually lost in Philadelphia. When I first heard about this losing streak, I assumed that maybe it was just luck; that they happened to be scheduled in Philly during seasons when the team was having an off year. Not so.

The Steelers last played there in 2022, when they were starting Kenny Pickett and the Eagles were Super Bowl contenders, so that one makes sense. But others don't. In 2016 they suffered the biggest loss of Tomlin's career (31 points) in a season in which they made the AFC Championship game. In 2008 they won the Super Bowl but lost 15–6 while Ben got sacked nine times. The 1991 and 1970 teams that lost there weren't good, but the 1979 team also lost despite winning the Super Bowl. Since the Steelers won their first championship, 2 mediocre teams have lost there, 2 were AFC Championship game participants, and 2 were Super Bowl winners. At this point I'm almost hoping they lose because it means a deep playoff run.

More seriously, you have to look at the Steelers' situation and consider what's at stake. They have a two game lead over the Ravens for the division, and a win in Baltimore next week will likely clinch it. They probably aren't getting the bye, regardless of what happens. George Pickens is questionable with a hamstring injury, and it's hard to see them scoring enough points against the league's best defense without him. Why rush him back when it's more important that he play against the Ravens? Let Russ fling the ball around to Scotty Miller and MyCole Pruitt and see what happens. Tell Joey Porter Jr. to give AJ Brown the Ike Taylor cushion so he gets used to covering guys without tackling them. Run shit from the back of the playbook just to show other teams that you're willing to run it so they can waste time preparing for it. If they go balls out they might lose anyway and go into Baltimore on a short week tired and banged up (followed by Kansas City on an even shorter week). The Eagles, meanwhile, are looking for a bounce back after almost losing to the lowly Panthers, and beating the Steelers would be a huge statement that they're for real. They've already got the division more or less wrapped up and they have a legitimate shot at the one seed, so this is a game to win. The rest of their season isn't difficult. Eagles should win this one easily.

Jazz doesn't have any cachet among the general public, I'll grant you that. But it does have cachet among critics and musicians, and I think that's where the problem lies.

From what I've read it looks like he was riding Greyhounds around and one happened to stop there.

Tagging @SteveKirk and @birb_cromble. Whether he has the ID and gun (and manifesto) on him absolutely matters. Remember Brian Laundrie? Every time a high-profile suspect is at large there are going to be sightings, and most of these are going to be spurious. They aren't going to be able to hold a guy for murder based on the ID of a McDonald's employee in another state who had never seen the guy before. I don't even know how they got PC for the search, though I wouldn't rule out that he consented, and he already misidentified himself to the police, so he's not exactly making all the right moves here. But even still, had they searched him and come up empty they wouldn't have been able to arrest him; he's being held on gun charges (and also misidentification and forgery, though those alone wouldn't be enough to hold him). He hasn't been charged with murder yet, but when he is, the ballistics evidence, fake IDs, and manifesto will all be key pieces of evidence against him. Without that, right now we're looking at "this guy looks kind of like the person wearing a mask in the videos" (and one without a mask), which is a much tougher case to make.

It's not so much that we don't want smart people or independent thinkers as it is that we don't want overly opinionated people who will fuck up the deliberation process. A jury full of relative simpletons isn't a good thing because they won't want to pay attention, won't be able to understand the testimony or jury instructions, and will instead just rely on whatever biases they have. The Chauvin jury was composed almost entirely of people with professional or managerial backgrounds. What we're trying to avoid is the kind of person who is overly opinionated and unwilling to work with the other jurors. We need people who can deliberate, not just voice their opinions. If 1 juror gives the other 11 the impression that he isn't fully invested in deliberating and has already made an unchangeable decision, all it's going to do is piss off the other jurors and increase the chances of a hung jury.

That brings me to another aspect of your plan that was faulty: The presumption that you would be able to hang the jury on your own. Hung juries are almost always fairly evenly split. If you find yourself in a room with 11 people who are voting to convict after several days of deliberation, then it's unlikely that they're doing so purely for political reasons. If you haven't turned at least a few members around in that time, then you're probably wrong, and unless you're a total moron, you'll probably come around yourself. In a high-profile case such as this, there is going to be a lot of pressure for a verdict, and the judge isn't going to send everyone home just because you say you're deadlocked; the system is willing to keep you there a lot longer than you think they will.

I think it would be fascinating to hear about how trial lawyers approach selection in a big case like this one.

I can't speak for big cases, and there are differing theories, but a few general truisms hold. Basically, I aim to have a discussion with prospective jurors, not an examination. In big trials they might interview the jurors individually, but most of the time they bring them in 10 or 20 at a time. I'll start by making a general statement that I expect most people to agree on, just to get people comfortable with raising their hands. There will inevitably be someone who doesn't raise their hand, so I'll pick on that person first to see why they don't agree with everyone else (it's usually because the person is incredibly shy). From there, I try to focus on open-ended questions that don't suggest an answer and give the prospective juror a chance to elaborate on their views. I try to avoid anything that can be answered with a simple yes or no.

For example, in this case I might ask "In the past several years there has been a lot of discussion about how people are increasingly feeling unsafe on public transit. What do you think about that?" And this is where @ArjinFerman's comment ties in. Most people will speak freely about controversial subjects during voir dire. Most people will offer opinions that have the potential to get them booted. You don't know what my trial strategy is or what evidence is going to be presented. You haven't read all of the other jury questionnaires. You don't know where I'm going with my questions. If you think that straddling the line between both sides is going to work, you'd better be sure that you know what the sides actually are. If I'm the prosecutor on this case, I'm not trying to get a bunch of woke-ass do-gooders on the jury, because that isn't going to happen. I've probably accepted the fact that the jury pool is frustrated about erratic behavior on the subway and is sick of having to deal with it. Yeah, some people are more liberal, but they're going to be outspoken and probably get the boot from the defense. I'm trying to craft an argument at trial that acknowledges Mr. Penny's right to intervene but that the problem was in the execution. The only question is whether I think you're willing to accept my argument, and you don't know my criteria for that.

Just because it came off as competent based on initial reporting doesn't mean it was competent. He committed murder in one of the most heavily surveilled parts of the country. His entire stay in New York was known and public within 48 hours of the murder, and he was caught within 4 days. The only thing competent about this murder was that he wore a mask and nondescript clothing and left the area fairly quickly. Just because he wasn't a complete moron doesn't make him a criminal mastermind.

This creates a conflict between the interests of the individual and the interests of the state, and it comes up much more than you'd think and has probably affected you at some point. Consider the following: A runs a stop sign, causing an accident that totals B's car. A policeman on the scene finds A at fault and issues a ticket for running the stop sign, the penalty for which is a $100 fine and points on the license. A pleads not guilty because he wants to avoid the points, and it's customary for the state to agree to drop the points in exchange for a guilty plea where the defendant only pays the fine. A enters his plea a week after the accident, and the court schedules a hearing for two months after the accident.

Meanwhile, B is without a vehicle and puts a claim into A's insurance company. She is relying on the insurance payout to buy a new car, which she needs to get to work. Since the civil claim is tied up with the criminal one, however, the insurance company can't pay out until the ticket is resolved, which it won't be for two months. Furthermore, B now has to be ready to present evidence at trial, since she doesn't know that A just intends to take a deal; for all she knows, he may argue that he never ran the stop sign at all. Plus, there's always the risk that the cop doesn't show up and she's the only witness available to testify, so she has to appear lest the whole matter be dismissed.

So now B is stuck waiting months for an insurance payout that A's insurer would otherwise have just paid, making everything far more complicated than it needs to be.

If over half the jury pool was rejected for spurious reasons then it doesn't sound like it was that easy to get on. I'm assuming you answered the voir dire questions honestly.

Wrongful death is a creature of statute, and as such the statute defines who has standing to sue. A rough approximation is that you'd have standing if you'd be entitled to inherit under the state's intestacy law.