Rov_Scam

3 followers   follows 0 users
joined 2022 September 05 12:51:13 UTC

User ID: 554


I understand what you're saying, and while I don't practice in criminal court or (presumably) your jurisdiction, my own experience suggests that this is unlikely to happen. As we all learned in law school, the practice of law is the application of the law to the facts of the case. Traditionally, pro se litigants who don't know the law argue the facts and appeal to a vague sense of justice. An LLM is the complete opposite, since the LLM usually doesn't know anything about the facts but will gladly generate pages upon pages of vague legal arguments based on the invariably vague instructions it was given. Even a really good LLM is ultimately limited by the facts the user inputs, which, most of the time, are few to none, because the user sees it as just a magic box that will spit out something that looks professional but really doesn't do anything. Hence you get a guy with a 300-paragraph brief that doesn't once even hint at the general kind of case it is.

Overall, though, while I see this as a problem, I only see it as such insofar as acting pro se generally is one. If a prosecutor is reduced to tears of rage because he has to respond to endless motions from a pro se litigant and cuts a favorable deal to get out of it, I don't see how that situation is any worse than if the same prosecutor has to deal with the same thing from a team of high-priced attorneys paid for by the father of a wealthy defendant. My concern here is less for the prosecutor and more for the pro se litigant who wastes the court's time and doesn't get a deal when he would have gotten one had a public defender filed the one motion that had any merit. My concern with LLMs isn't much different from my overall concern with DIY legal solutions, where people think they're getting a good deal because they save a little bit of money in the short term but end up getting screwed in the long term.

In 2009, the Pittsburgh Pirates lost a spring training game to Manatee Community College. In 2019, Chelsea lost to their own youth team. By your logic, the Pirates should have forgotten about Andrew McCutchen (who played in that game) and signed some of the Manatee players, who were readily available. I do not believe any of them ever got so much as a rookie league contract.

The US Women's team thing wasn't even that level of a loss, because it wasn't even a real scrimmage; national teams don't do scrimmages. They were sharing a training facility in Texas with the boys' youth team, and when the youth team came over to watch them practice and get autographs, they ended up agreeing to play a kick-around game. They don't play games like this as tune-ups or anything, because the team members would have been playing about 60 games per year between their pro and national teams. The only reason anyone even knows about this is that that particular camp was interrupted by contentious contract negotiations with US Soccer, and US Soccer decided to release the results without any context to gain leverage (while sabotaging their own product). Just think about it for one minute: If you're a player or a coach, are you going to risk injury by trying to win the game? Or are you going to treat it like a fun treat for the kids that under normal circumstances nobody would hear about? Look at the NFL; its players play fewer games than soccer players but are averse to playing in the preseason and absolutely allergic to the Pro Bowl. Now imagine that you aren't making nearly as much money and that your pro career can end in an instant.

If the true value of AI is only in the hundreds of millions, the industry is even more fucked than an AI skeptic like me could imagine.

That isn't really any different from how it is now, though. If a dedicated NIMBY group wants to oppose a project by any means possible, they can hire a lawyer who will argue that §902(a)(4) requires a bat survey, and they might win. A citizen group that generates a massive filing that nobody understands and basically just lists arguments looks impressive from their own perspective, but it gets shot down immediately when they have to go in front of a judge. The attorney for the municipality or whoever explains to the judge why §902(a)(4) doesn't apply in this case (if it obviously did, they would have already done the survey), and may even produce a report from the guy who does the bat surveys explaining why one wasn't needed in his opinion. The judge smiles and nods while the pro se NIMBY guy fumbles through his brief trying to find the part about the bats that he can't remember, because he didn't read the whole thing, let alone understand it, and the judge tosses the complaint.

Like @faceh, I too have had the displeasure of witnessing a pro se litigant attempt to argue an AI slop motion in front of a judge. A 300+ paragraph AI slop motion. It was a post-trial motion. And I heard quite a bit of it, because all the litigant could do was read it verbatim. After 20 minutes, I still had no idea what the case was even about, because he evidently didn't know that lawyers have to argue the facts of the case. After it became unbearable, I realized that, since a TV show was filming in one of the courtrooms, a friend of mine from high school who works in the industry might be there (I run into the guy once every few years), and even if he wasn't, it would give me something to do while I waited for my case to be called. Sure enough, I saw him as soon as I left the courtroom and caught up with him for about a half hour. When I came back, the guy was still reading from his brief, and the judge told him he wasn't going to listen to the whole thing and cut the guy off while he gave the defense a chance to argue. It was only then that I was able to glean that he had apparently sued Hertz rental truck after being injured on their property, and that AI evidently didn't tell him that his mother was not qualified to act as a medical witness, or prepare a proper response to their motion in limine that would have allowed her to testify as a damages witness. When the judge went back to the guy for his response, he just continued reading from his brief.

Honestly, I think AI actually makes things worse for pro se litigants, because at least before, judges were willing to cut them some slack and let them argue the facts of their case in a more informal way. Their deficit was that they didn't understand the law well enough to argue the facts effectively. Now they can generate pages upon pages of legalese they don't understand but think is the magic bullet that separates them from the lawyers, and they believe they'll be able to wow the judge with their mad legal skillz. All the judge is going to do is smile politely during their argument and rule against them, because they haven't said anything.

I'm not a programmer, so what you just said is all Greek to me, but I'll take your word for it that what you described represents a significant departure from the expectations that the AI-horny would lead one to have concerning the capabilities of the product. But they can always respond that these are problems that are solvable, and with the technology in a constant state of flux we can expect that in the coming years things will only continue to improve, since it was only very recently that even that level of functionality became possible. My concerns with AI go beyond that, though, to problems that don't seem to be solvable in the short term and that have only gotten worse in recent years. These are more business-related than technology-related (though the limitations of the technology do factor in), and they threaten the entire viability of AI as an industry.

I use Photoshop quite a bit. During the pandemic, though, my graphics card crapped out, and since they were in short supply, I replaced it with an old one from 2014 I had lying around. Since I don't play games or anything, this was a perfectly acceptable solution, except that at some point newer versions of Photoshop started offloading some of the workload to the graphics card, for which mine was hilariously out of date. While the newer versions technically worked, there was a certain wonkiness that prevented me from adopting them full-time, and I continued using an install of Photoshop 2018, which was more than adequate for my purposes. In the meantime, I noticed that a newer version I had installed had incorporated "neural filters" (aka AI) into the program, which of course it did, and I fooled around with this a bit. Some functions were fun, if limited, while others, like upscaling and automatic scratch removal, didn't seem to do anything useful. But whatever. A few weeks ago I finally got a new graphics card after the old one gave up the ghost, and I looked into Photoshop 2026 to see what had changed since 2025. The answer was that the updates were basically all AI-driven, and not in a good way.

Adobe has been a convenient punching bag for the enshittification trend as of late, and the purpose of this post isn't to pile on, but to illustrate how it's representative of a greater rot in the software business and how AI only seems to accelerate that rot. Like previous iterations, some of these AI features are impressive and some are stupid, but all of them cost extra. The way it works is that you get a certain number of credits depending on your subscription (and as a long-time customer of the Photoshop-only plan I get a generous number of credits), and each time you use one of these features it costs a certain number of credits. And if you run out, you can't just buy more; you have to upgrade your subscription, and I already get the most credits you can get with an out-of-the-box subscription that doesn't involve going through their sales department. To make matters worse, how many credits a given action will cost isn't based on a set rate but depends on 900 different factors, and is so complicated that the software can't even tell you how much an action will cost before it's run. And as a final blow, they don't even provide a way of telling you how many credits you have remaining; you eventually just get a message that you've run out.

The latter problem is obviously part of Adobe's slimy sales tactics: they want users to be unable to plan ahead so that they unexpectedly run out of credits in the middle of a time-sensitive project and are forced to upgrade, so I can chalk that up to normal corporate bullshit. The former problem is due to the fact that there is simply no way of predicting how much compute an AI system is going to use until it's already used it. The real kicker is that, due to the inherently unpredictable nature of generative AI, you don't even know if the command is going to achieve the desired result, or how many attempts and tweaks it will take to get there, and it may take multiple expensive generations just to get something usable. The result is that the feature is inherently self-defeating. There are lots of Photoshop functions that may require tweaking or not work at all, but they're integral parts of the software and cost the user nothing but time if they don't get things right on the first try, and the individual user will get more proficient with experience. The AI features are simply a black box that requires you to throw an unknown amount of money at it and hope it does what you want it to. I, as a user, am thus disincentivized to bother learning how to use these features, because my access to them is liable to be cut off at any moment, whereas my existing workflow works fine as it is.

This is basically the problem with the whole "AI as a service" model these companies all seem to be banking on. If the response to Photoshop 2026 is any indication, customers want cost predictability and function predictability. If Microsoft Word cut you off after 1 million words per month, it would seem less like you were buying software and more like a free trial. It would be even worse if the number of words you were allowed to type depended on font, font size, formatting, etc., and you didn't know how many credits each action would cost and were liable to be cut off in the middle of writing something important. Luckily, I can use Word to my heart's content without it costing Microsoft any extra, so they have no reason to impose such a restriction. With generative AI, on the other hand, every action costs the company money, whether it benefits the customer or not, and the company can't predict in advance how much money that's going to be. So there's no way an AI company can realistically charge based on use without pissing off their customer base, who will cancel after getting that first $75,000 bill in the mail that no, they aren't paying.

Charging a flat monthly fee for unlimited usage doesn't solve this problem so much as stick the provider with the bill instead of the customer, so most of the AI services have resorted to a deceptive hybrid model where it looks like you're getting unlimited usage, but there are asterisks stating that it's subject to caps, which caps are never explicitly defined. Some charge a monthly fee for access to a certain number of credits, which don't roll over at the end of the month. I'd find a lot to criticize about these models, which wouldn't fly in any normal business sales situation and would be relegated to the scummy end of the consumer pool in any other context, except that they still manage to lose money for the big players. Third-party agent developers may be profitable, but only because they're already buying their compute at a discount.

The only conclusion I can draw from all this is that software as a service, while loathed by customers, isn't really beneficial to companies either, other than as a cheap way of temporarily boosting numbers. And that's indicative of a deeper problem in the tech industry as a whole, a problem of their own making. From the 1980s through the 2000s, the computer industry grew exponentially. In the 1970s, computers were things that large corporations and government agencies used to manage large databases. In the 1980s they became productivity tools that every employee had on his desk. By the mid-90s, home adoption had started in earnest, and by the end of the decade practically everyone had one. In ten years the internet went from being a hyped curiosity to an essential utility. The technology was also changing quickly, and the improvements were massive. In 1994, a typical home PC had a 486 processor clocked at 66 MHz, 8 MB of RAM, and a 500 MB hard drive. It would run Windows 3.1, which would be replaced a year later with Windows 95, a huge upgrade. Five years later that computer would be hopelessly obsolete; in 1999 a comparable build would have a 450 MHz Pentium II, 128 MB of RAM, and a 13 GB hard drive. It would run Windows 98, which would be replaced two years later with Windows XP, an even bigger upgrade that eliminated the finickiness of DOS once and for all.

By 2010, CPUs would be clocked in the gigahertz and run multiple cores, RAM would be measured in gigabytes, and external hard drives of more than 1 TB would be affordable. Windows 7 had been released the year prior to great acclaim. To put all that in perspective, I'm currently writing this on a Lenovo ThinkPad from 2024 that has the same amount of RAM as the currently available model, which has the same amount of RAM as my home PC build from 2019. Or 2018; I can't remember the year I last did a major upgrade, but I haven't done any since before the pandemic, aside from the aforementioned graphics card. I haven't needed to upgrade it either, as there hasn't been any decline in performance in the tasks I actually use it for. And even that upgrade didn't appreciably improve performance over the 2014 gear I was running before. Windows 7 was the last Windows release that was universally loved; every one since then has been met with varying degrees of derision. There had been flops before: Vista was too far ahead of its time to be usable, and ME was a half-assed stopgap that never should have been released. The only mistake in this vein since then was 8, which completely misread the future of computing. Every new Windows since then has been an unexciting incremental upgrade that would probably have worked just as well as a security patch for 7.

I don't want to overstate my case here and suggest that computers haven't improved in the last 15 years; I'm sure my 2014 build would be woefully inadequate by today's standards. The point is that the advances aren't coming as fast as they did in years past, and when they do come, the improvements are more subtle. It feels like 2010 was the year that computer technology reached a mature phase, where all adults, even your grandparents, knew how to use it, and good technology was as cheap as it was going to get. This wasn't clear at the time, but within a few years it was apparent that things had stagnated. In the early 2010s I listened to TWIT semi-regularly, and it didn't seem like there was much to get excited about. The two big things the industry was pushing as the next frontier at the time were wearables and IoT devices. The former flopped spectacularly. The latter had better market penetration, though some of the implementations were ridiculous, and the whole concept has since become a metaphor for how technology has gone too far, trading simplicity and security for dubious functionality. As hardware stagnated, software quickly followed suit. Improvements in software follow improvements in hardware, and with hardware capability virtually unlimited, there was nowhere left to go. Sure, there would always be new features, support for new devices, and better security, but the game-changing upgrades seemed like a thing of the past.

So take a program like Photoshop, which was first released in 1990 and had improved leaps and bounds by the time CS6 was released in 2012. A lot of users contend that this was peak Photoshop and that everything since then has been unnecessary bloat. I am not one of those people; the current software is significantly better. But CS6 was also the last version to be sold as a standalone product. Adobe had good reasons for making the switch at the time—Photoshop was an incredibly expensive professional-grade product that also had broad-based appeal. This meant that it was particularly susceptible to piracy, and it lost more money to piracy than more modestly priced products did. They had tried to combat this in the past by releasing less expensive consumer-grade versions like Elements, but these never really took off, as consumers felt like they were missing something (most notably, Elements did not provide access to curves, which every photography book agreed was an essential tool). The decision to go subscription would give consumers access to an always-up-to-date full version of the product for less than it would cost to upgrade every other release.

The crowd who insists that CS6 is better is dwindling now, but even in its heyday it was mostly composed of people who had never actually paid for Photoshop and were mad that it was more difficult to pirate. But when Creative Cloud was first released in 2013, much of the criticism came from professionals and actual customers who were concerned about the new model. Sure, it was cheap now, but what was stopping Adobe from jacking up the price in the future? Creative professionals aren't exactly the most highly paid. In the past one could upgrade whenever he could afford to and, if necessary, stick with a legacy version until things improved. But making one's continued access to software needed for one's job dependent on paying a ransom one might not be able to afford was a different story. The reaction might have been better if CC had offered a significant upgrade over CS6, but rather than wait a few years and offer a significantly improved version, Adobe released CC earlier than one would expect, and it didn't offer much of an upgrade. Accordingly, the new subscription model was the only noteworthy thing about it. To Adobe's credit, the subscription price didn't change at all for over a decade, but in hindsight, there weren't any game-changing upgrades, only incremental improvements. If the company had simply relied on customers paying full price to upgrade whenever they felt it was worth it, they might have been waiting a long time.

As SaaS has matured from those early days, it has become less about preventing piracy and more about anxiety that newer products won't differentiate themselves enough from the old to make upgrading worth it to the user. Better instead to lock in that revenue stream with a subscription that's impossible to cancel short of telling the bank to stop paying. Unfortunately, as a business move it's a one-time thing: the number goes up as all the old customers switch to subscriptions, but once they're aboard, the line flattens out again. In normal industries, this isn't a problem. In the computer industry, where 30 years of exponential growth had been not only welcomed but expected, the situation was unacceptable. Since there was nowhere left to go technologically, the industry had to resort to cheap gimmicks to keep the numbers up. SaaS was one. The aforementioned IoT was another; nothing better than announcing huge deals with appliance manufacturers who will be integrating your products. The problem with gimmicks like this is that, while they can increase revenue, they have a shelf life. A deal with Whirlpool to make a smart fridge may make both companies' numbers go up, but once you have computers in every fridge sold, exponential growth is no longer possible. By the 2020s, the tech industry was running out of gimmicks. I think the reason Apple became the top dog during this period is that they were the only tech company that didn't seem to be peddling bullshit. I had a friend who was in and out of tech startups during this period (I even interviewed at one of his companies), and every idea was based on a free service that was really just scaffolding for advertising or data harvesting. A company like Apple that still sold products and services they expected customers to pay for was an outlier indeed.

So AI came to save the day. I'm not denying the fact that the technology is impressive and potentially useful, but it is just about the biggest gimmick one could imagine. Because simply being impressive and useful puts it in about the same league as, well, Photoshop, which, even in its first iteration, was a revolution to anyone who had ever worked in a darkroom. Unlike Photoshop, though, AI promises to solve not one particular problem but all of the problems, including ones that haven't been identified yet. This latter point is particularly salient, because exponential growth in the tech sector was never based on the present, but on the future. If the tech industry in the 2010s looked like it was in danger of stagnating and becoming a normal industry, in the 2020s the sky was the limit. It was now worth it for capital to invest all of the money in AI companies, because if they were successful, then money wouldn't matter anyway.

And if they weren't successful? Well, they never considered that possibility, because the line only moves in one direction. The equation is pretty simple: If AI companies are successful, then your support was worth it and will be repaid. If they aren't successful, then you need to give them more money. But what happens when the money isn't there? How good Photoshop's AI features are is ultimately secondary to how much they cost. Someone has to pay for them, be it the customer or Adobe. Some companies may be willing to subsidize AI, but if Adobe is willing to give product away for free, they'd do better by dumping CC and charging $500 for CS7, and we know that ain't going to happen. Instead, they've raised subscription prices by 50% in an attempt to get customers to pay for the privilege of having access to functionality they have to pay extra for if they actually want to use it. I doubt it's a coincidence that the first substantial price hike in the history of CC coincides with the introduction of the expensive AI upgrades. I doubt Adobe will suffer much for it, because their business (like Apple's) is actually sound and their products indispensable, but it's indicative of the perversion that's at the center of the tech world. Eventually, somebody is going to expect to get paid, and the party will be over. And as I write this, I don't see any scenario where the money is going to be there.

If they set themselves up as legal gambling companies, they wouldn't have as much of a problem. But they don't want to deal with the regulation, which includes being banned in a lot of states, including some big ones, so it's worth it for them in the short term to insist that they're in some vague category that can't be regulated and do the minimum to appease the people who have the power to sic attorneys general on them. If they can stay out of the headlines it's better for business.

Force majeure is pretty rare and reserved for genuinely exceptional circumstances, not situations where they simply don't want to pay on a policy. If that's the case, they'll usually find another reason not to pay.

You're entitled to your opinions on the ACLU, but they have very little to do with this case, other than that they're paying the attorney who handled it. Their participation is incidental, and if it weren't for their public profile, it would be entirely unknown. News coverage of high-profile lawsuits rarely mentions the insurance companies that are actually paying for them.

Sophistry? No. If I wanted to engage in sophistry I'd argue a lot more motions than I actually do.

Isn’t it also the case as I’ve read that most parties sue in order to settle out of court?

I don't know if that's exactly the way I'd phrase it. Cases going to verdict are certainly rare, yes. However, a good number of cases are dismissed before they even get close to trial, and a lot of cases will just sit on the docket because the plaintiff isn't motivated enough to get things moving. If a case is actually going to get in front of a jury, then it means it has some merit, and the defendant isn't going to take the chance that the jury decides it's worth more money than the plaintiff is willing to settle for. The plaintiff, on the other hand, isn't going to get greedy and pass up guaranteed money when they could be walking into a defense verdict. Add to this that the courts have a positive bias toward encouraging parties to resolve matters on their own, with some even requiring pretrial mediation, and it's no surprise most cases settle. The Newegg case happened because the calculus changed: it was cheaper in the long run for them to countersue in the hope that it would discourage future litigation.

As a litigator, there are any number of things I might take into consideration when making an argument in front of a judge, including favorable facts, unfavorable facts, favorable law, unfavorable law, and the kinds of arguments the judge tends to pay attention to. One thing I have never taken into consideration is whether my argument is intellectually consistent with an argument I've made in the past, even if I'm arguing for the same client in front of the same judge in a case with substantially similar facts to a case I've argued previously. Indeed, if a judge tells me he doesn't buy my argument, I'm not going to waste my time in a future case making that argument. If I did, I might be consistent in my opinion, but I'd be doing a disservice to my client, putting my own sense of moral consistency ahead of their very real legal jeopardy.

And here you are, saying you're infuriated because a lawyer whose prior stances you aren't familiar with is arguing in an area of the law that hasn't been relevant until very recently in front of a court that has repeatedly signaled that they have a tendency to find some lines of reasoning more persuasive than others. What's she supposed to do, proceed with an argument that she thinks is a loser because she is, in some abstract way, acting as a representative of "the left" and other people who have nothing to do with her or her case besides a vague association with "the left" have made similar arguments in the past? What kind of advocacy is that?

I think a better strategy is to just limit your consumption to trusted channels. I'm reluctant to watch anything that isn't by someone I've seen before. I may not be able to tell when it's 100% AI content, but a low-effort video is a low-effort video. It's pretty easy to tell when someone doesn't know what they're talking about and is simply summarizing a Wikipedia article, or LLM output for that matter.

At the time of the revolution, the colonies had their own governments and their own courts, which courts subscribed to the common law. Since there was no existing tradition of comprehensive legal codes, upending the system entirely would have meant creating a new civil law system from scratch, which there was no reason to do, since the common law had worked fine for 99% of cases, and they always had the opportunity to enact legislation for the 1% of cases where the common law was inadequate. Even to this day, we still rely on common law for the vast majority of the things that courts actually deal with on a day to day basis, and it continues to evolve in the individual jurisdictions, such that law students are vexed by having to learn majority and minority rules.

This is one of the areas where the current state of the market is objectively worse than in the pre-internet era. I remember that when I was in college (the internet existed but hadn't subsumed everything), it seemed like every town had a video store that had opened when the VCR came out in the 1980s, ordered every title that was available, and never thrown anything out. The result was that you had independent shops whose archives included pretty much everything that was ever released on video. Sure, it might not be on DVD, and the tape might be in bad shape from having been watched 4 million times, but at least it was available. I remember they had a 5-catalog-rentals-for-$5 deal, and the rentals were for a week, so it was kind of a weekly ritual to rent 5 movies every week whether I planned on watching them or not. They also had a byzantine setup that encouraged browsing, because you never knew where you'd find anything, though they had a catalog you could consult. The new releases were obviously segregated, and they had the normal categories (comedy, drama, etc.), but the AFI 100 movies had their own section, as did "Black and White Classics", and there was something called the Video Vault that could have anything. I believe there was even a small LGBT section, definitely odd for a small-town store in the mid-2000s.

They closed in 2007, well before streaming. I think it was a combination of OG Netflix and Redbox. I worked at a video store in high school, and 90% of our sales were newer releases, though the one I worked at didn't have much of an archive. It was part of a grocery store, and it became easier for the grocery stores to just put a Redbox machine in the lobby that would cover the dozen or so titles that actually made money. Netflix didn't make sense for new releases at the time, since you had to wait and could be on a list, but for movie buffs who would just put a hundred movies in the queue and watch whatever Netflix sent them, it was perfectly fine and didn't require as much effort. My roommate and I got the Blockbuster equivalent circa 2008 and I remember he spent an afternoon just inputting the entire 1001 Movies You Must See Before You Die list in, and we'd watch whatever came in. That was probably the peak of movie availability since they really did have close to everything you could think of, unless it was really obscure.

As soon as streaming became the main business, it was over. Bandwidth considerations came into play, similar to Redbox's space considerations, making it impossible to keep an inventory of that size, especially once the licensing agreements became more complicated and probably required paying for rights even to stuff that wasn't in high demand.

There's also Kanopy, which has the added advantage of being free to a point.

If you seriously think that third world immigration is doing the kind of damage you're suggesting then I have some swampland in New Jersey that's for sale. Maybe you should consider moving to Pittsburgh? Only 4% of the metro population is foreign-born, compared to 14% nationwide and over 30% in places like New York City. We're also about 85% white, almost all non-Hispanic. I love my hometown, and the cost of living is low, but the population has been flat for a while, and before that it was actively declining. If you had been here 20 years ago I could have shown you working-class neighborhoods with high crime rates filled with drugged-out white trash. One neighborhood that looked like it was on the brink of collapse only turned around after the area's modest Hispanic population decided to settle there and revitalize the business district. The other one got significantly better once Bhutanese refugees moved to the area, though that area is still bad, and still 70% white.

Of course, none of these areas are that bad, and everywhere is full of people with names that end in vowels. If you want to see some real shittiness we need to go just down the road to West Virginia. And no, I'm not going to take you to hillbilly country, which would be too easy. I'll instead show you actual industrialized areas full of white Anglos that are shittier than anything you'll find in the Pittsburgh region. Wheeling-Pittsburgh Steel ran 9 mills in the Ohio Valley—Follansbee, WV; Mingo Junction, OH; Steubenville, OH; Martin's Ferry, OH; Wheeling, WV; Beech Bottom, WV; LaBelle, WV; Yorkville, OH; and Benwood, WV. There was also a huge mill at Weirton, and several smaller facilities. Most of that is gone now, but the area is significantly shittier than Pittsburgh.

But that's the wealthier part of West Virginia. If we keep going south, I can show you Chemical Valley, which is even whiter and more Anglo than the Panhandle, and the chemical plants are still in production, though Kaiser Aluminum at Ravenswood closed a long time ago, and Ormet closed in 2013. Jamie Oliver filmed a show in Huntington after it was dubbed the fattest city in the US, and it also probably has more fentanyl addicts than any other city in the US. Just remember, if you buy a house there, not to leave anything in the yard, like grills or lawn furniture or even children's toys, because people will steal anything that isn't under lock and key. I can assure you that this area is free from the negative influence of dirty third-world immigrants, though.

I think that the "and" in the 14th Amendment, by imposing two conditions, makes it clear that one can be subject to US jurisdiction but outside of the United States. If the clause only referenced jurisdiction it would be a different matter. There are already people who aren't in the US by any definition of the term, but are nonetheless recognized as being subject to US jurisdiction. For instance, a man in Guatemala who enters into a business contract with a man in Texas might be subject to US jurisdiction even if he's never been to the US in his life.

There's evidently an issue where the home club expects a lot of away fans to show up for this particular game, and they don't want them dominating the home side of the stadium. I can understand why they don't want their stadium full of away fans, but it seems to me that warranting that you are a supporter of the team is one thing, while requiring proof before you enter is another. This isn't reasonable. I'm a longtime fan of the Steelers, at least to the extent that I don't care about other teams, but I don't have any photos of me in Steelers gear. I own a ballcap I rarely wear and a t-shirt that I do wear, but it's an obvious bootleg with the Grateful Dead skull and roses logo modified into a Steelers logo. I don't attend Steelers games or "events", unless you count watching games in a bar, and at that, it's not like people take pictures of me while I'm there. The only such photos I can think of are ones taken after the Penguins won the Stanley Cup, and that was in 2017. Hell, I went to Charlotte to watch Pitt in the ACC championship a few years ago (twice, actually), and I don't have any pictures from either trip. I don't know why they would expect their fans to have these pictures. It essentially means that buying the ticket isn't enough, and that there's an expectation that you buy their merchandise as well.

Here are the relevant terms and conditions:

Home Match Tickets are for the use of supporters of the Club only. By applying for the Home Match Ticket and/ or using the same you hereby warrant and represent that you are a supporter of the Club and/or that you are not a supporter of the Visiting Club.

It's pretty rare for Brits to call football soccer. Rarer still for them to be concerned about the availability of team gear in the States.

There were. The one that got me was the French colonies. I'm guessing that they wouldn't count India, and that most educated people wouldn't guess India, but only because most people don't know about the colony at Pondicherry.

Consider telling them that you're an American tourist and therefore cannot provide the requested photograph, but if the team is willing to send your family complimentary apparel and other merchandise you will gladly wear it and root for them. I have Photoshop and like to think that I'm fairly good at it, though who knows if it will be enough to fool anyone looking for it. DM me if interested. That being said, I think being straight with them would be better, up to the point that it might be worth making an international phone call to get it sorted out.

I don't know what Squid Game is either so that doesn't help.

This article feels like the Chinese equivalent of trying to evaluate the dating marketplace based on "First Dates from Hell" segments on Morning Zoo radio shows.

I know who Mr. Beast is, in that I recognize the name, but I've never seen any of his content. And content should probably be in scare quotes, since I'm pretty sure that it's all unwatchable filler that goes nowhere.