ControlsFreak

5 followers   follows 0 users
joined 2022 October 02 23:23:48 UTC

User ID: 1422


I don't follow. Every part of you that is necessary to follow the clockwork has appropriate access to the mechanisms of the clock, at least to the extent that is necessary for it to be able to follow the clockwork. If there were some part of you that didn't have such access, then it wouldn't be able to follow the clockwork, and we would reach a contradiction.

Like, maybe try to explain how this works directly on the example of analyzing an actual clock, with determined suboptimal action y' and a hypothesized optimal action y. What doesn't have access to what?

I think I have now concluded my argument that there is no contradiction, once one tries to explain how the contradiction is supposed to work.

I don't see a contradiction at all. This proposed unmoved mover is already clearly an exception to the general rule of requiring a prior mover; apparently the preference is to avoid infinite regress rather than to avoid having an exception. It is simply not moved by some prior cause. That's kind of it? I think you'll have to be more explicit about how you find a contradiction.

What's confusing is that I'm missing an argument. Some sort of, "Here are some premises, and here's a conclusion," sort of thing.

I'm not quite seeing an argument yet. Go on?

I mean, kinda no? That's where the Wolpert/Benford critique comes in. You can't formalize the problem in terms of game theory without adding additional assumptions. If your additional assumptions to formalize it are, "It's actually a clock, and there's no feasible action set with cardinality greater than one," then sure, you have a suitable formalization... but it's kinda not game theory. If you want to back away from that being your additional assumption, it's kinda still on you to state other formal additional assumptions that make it a well-posed game.

EDIT: Perhaps another way of describing it would be as follows. Suppose one is just analyzing a clock. We'll discretize time for now just to make it simple. Say that we observe from our analysis that in the transition from time t_1 to t_2, the clock will become one second slow compared to some 'objective' time (handwave any difficulties here). We could observe that this is, in some sense, suboptimal, sure.

Now, does it make sense to say something like, "What if we just call this suboptimal action y' and hypothesize an alternative action y that doesn't result in being one second slow?" Would it make sense to say that we have constructed a decision theory problem? Note that we're not specifying anything about any sort of real policy space or anything; it's not like we're saying, "Here is the policy space of possible mechanisms that a non-clock can choose from to design the clock."1 We just have a clock.

Suppose we say that there is some being, Omega, who will accurately predict that said clock will take action y' and become one second slow, and then put some quantity of money in front of the clock. Suppose we say, "Well, imagine the clock took hypothetical action y, which it can't do, then imagine that Omega would put a different quantity of money in front of the clock in that case." Does this become a game theory problem? If so, what am I supposed to solve for? What is the space of possible solutions?

1 - This is perhaps related to my comment about what Yud did to the prisoner's dilemma problem. He created some different policy space about source codes.

I think this is non-responsive to my comment. Isn't god himself a "mover" in classical theology?

Jesus moves and changes, yet he's the god that is not supposed to do either of those things.

I already don't really follow. I thought the second word of "unmoved mover" was "mover". I didn't think classical theology posited an unmoved unmover.

In a clockwork universe, is there such a thing as decision theory, or a subset thereof known as game theory? It would seem to me that, sure, one could have a mathematical theory of optimization, extremal values, or even min-max theory, but it would not seem to me that one could view any such results as being prescriptive - i.e., "If you are trying to accomplish X, you should choose Y." Instead, it would simply observe, "You might by chance (or deterministic integration of physical differential equations or whatever) take action Y or Y', and it turns out that we can compute that Y is optimal for purpose X, while Y' is suboptimal."

That is, if one is an adherent to this conception of a clockwork universe, I think the way they would state their position on Newcomb's problem would be something more like, "You will either 1-box or 2-box, based on the movements of the clock. We can also compute from axioms regarding the clock's movements that 1-boxers will possess more money," and less like, "You're in this hypothetical situation where you need to think about the rational way to proceed optimally, and here is why you should choose to act in the following way." I think if such proponents presented their perspective in this way, it would be less amenable to criticism that their problem is ill-posed as a decision/game theory problem.

The axioms of decision/game theory seem to conflict with the axioms that seem to appear here. I guess one way to put words to that would be that one has a feasible action set within the underlying dynamical system that has cardinality greater than one. Perhaps another way to put it is that it does not seem to me that decision/game theory is applicable to clocks. The feasible action set of clocks has cardinality one. One does not ask how a clock should choose among non-identical actions, though one may observe whether a clock's deterministic actions are/are not optimal according to some metric.

Taking this alternative position would, I think, sidestep the criticism I relayed above from Wolpert/Benford, as what they were fundamentally trying to do was to formalize the problem within decision/game theory, where players have feasible action sets with cardinality larger than one. They observed that if you do this, you run into contradictions without further specification. But it would seem like, sure, if you give up on that, give up on saying that it has anything to do with decision/game theory, that it's more like just making an observation about clocks and optimality/suboptimality, then I think you do avoid the critique.

My sense from the text is probably annihilationism.

You probably have to go more for AI atheists to find a god that makes future contingents like sea battles or box picking necessary and then also sets up a basilisk to torture you forever for the future contingents that it retroactively would have made necessary.

My sense tracks with that of @MathWizard. If you add some particular assumptions about the form of the problem, you can code it up, and likely, for a wide range of parameters, 1-boxing is higher EV.
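Since "you can code it up" is doing some work here, a minimal sketch of one such coding, under one assumed (and, per the discussion above, contestable) evidential-style formalization with predictor accuracy p and the standard $1,000,000 / $1,000 prizes:

```python
# A hedged sketch of one possible formalization of Newcomb's problem, not a
# resolution of the ill-posedness discussed in this thread. The accuracy p,
# the prize values, and the conditioning itself are all assumptions.
def newcomb_ev(p, big=1_000_000, small=1_000):
    """Expected value of each action when the predictor is right with probability p."""
    ev_one_box = p * big                # opaque box is filled iff 1-boxing was predicted
    ev_two_box = (1 - p) * big + small  # two-boxer gets big only when mispredicted
    return ev_one_box, ev_two_box

# For a wide range of p (anything above ~0.5005 with these prizes),
# 1-boxing has higher EV:
one, two = newcomb_ev(0.9)   # roughly 900,000 vs 101,000
```

Nothing in this sketch addresses the Wolpert/Benford point; a causal-style formalization would compute different quantities from a different strategy space, which is exactly the contrast at issue.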

I think the criticism of Wolpert/Benford is also similar in type. (Again, not really having spent sufficient time with it.) That is, they construct two possible interpretations. Either of them, you could just sit down and code. It may even be the case that for a wide range of parameters, EV still points to 1-boxing for both versions. However, my understanding of their claim is that those two codes will be very different. Even the strategy spaces are fundamentally different in their claim. And for a similarly wide range of parameters, the joint distributions will be contradictory. The point is not that the sign may be the same for this particular ratio of prizes; it's that there are just multiple contradictory ways to construct it.

Of course, someone could take the time and search out what ratio of prizes in the respective boxes produces maximum tension between the two interpretations, so that rather than having the two EV calcs mostly pointing in the same direction, we could maximize how often they conflict. That's kind of not the point of the critique, but I suppose it could be done if one found it necessary to really grok the difference between a well-posed and ill-posed problem. Though, like you put it, I probably can't be arsed to do it.

That said, I am almost motivated enough to try it (but it would probably have to wait a few weeks, and then, I'll probably be bored with it). I certainly don't know that we can for sure find parameters where the two possible games differ in terms of sign. If this problem were actually relevant to my research interests, I would absolutely just do it, because it's one where I have a vague sense of, "Wouldn't it have to be amazingly coincidental if the values were different, but the signs were always the same?" And when I sniff at the possibility that there could be an amazing coincidence like that, it's usually an indicator of a really interesting theoretical opportunity.

Wolpert and Benford argue that the problem is ill-posed for almost any error rate, so it's not clear that stuffing in a particular number actually helps resolve the problem. I haven't spent all that much time with this problem yet, so I'm not going to commit to saying that I think they're right about this, but it jibes with my intuition.

Generally speaking, in order to have a well-posed game, one must be very formal and precise in many details. Particularly things like order of operations, allowable policy spaces, information sets, and details around estimators. I've become more annoyed by estimators in various problems over time, even apart from the relatively minimal thinking I've done on Newcomb's problem. One of the greatest sources of my criticisms in reviews of submitted papers (or even when my collaborators come to me with a problem set-up and/or proposed solution) revolves around not taking sufficient care around estimators.

I do think that Wolpert/Benford at least suffice in arguing that there are at least two possible formalizations that are sufficiently well-posed. I think it's probably on someone else to either bite the bullet and say they are clearly choosing one form or the other... or to provide a sufficient alternative formalization that makes the details more clear.

Aside on Yudkowsky, relevant for the discussion below and my thoughts generally on these sorts of problems. I wouldn't be surprised if he has/had something in mind like what he did to the prisoners' dilemma problem, with the business about source codes and such. There could be a way to try to resolve Newcomb's problem in a similar fashion, but my perspective is that it would still be proposing a very specific formalization... and one that is not at all just a clear instantiation of the initial problem statement. I might go so far as to say that in the prisoners' dilemma case, he just proposed a different problem, with different policy spaces. Interesting in its own right, sure. Probably correct for that particular formalization of that particular version of the problem, sure. But also kind of just a different problem. In general, even minor tweaks to these aspects of the formulation can result in different games.

Similarly for Newcomb's problem, unless one takes the step of clearly laying out in a formal way exactly what they're going to specify for the domain of the problem (and then, I guess, argue that this is like, 'the one true interpretation of the original problem' or something), then I'm probably going to lean toward just thinking that the original problem is so informally stated as to be ill-posed.

Against Talking About Anthropics/Possible Worlds/etc in the Sleeping Beauty Problem

I get it. Anthropics is an interesting topic. Possible worlds has a long and rich philosophical history. I can get why people might want to expose more people to that stuff, kinda squint at the sleeping beauty problem, then think that it's close enough to spread the gospel.

But that's confusing people.

It's confusing them on what is otherwise a very simple math problem.

For those who haven't seen my last entry, I made some minor modifications, primarily adding a second person, so we have both Alice and Bob undergoing simultaneous experiments. The simplest version is that they each undergo approximately the same experiment, with the same coin, but opposite (the implications of heads for one person are like the implications of tails for the other). I also had some computer communication between them for some instructive purposes, but that's not even necessary here.1

Let's follow Alice and Bob a bit further. Suppose after their one/two awakenings, they're put back to sleep, memory again wiped. They're both finally awoken on Wednesday. "No more questions," the doctors say. "We took the liberty of interpreting your answers as wagers. We have your home address. We'll compute your payout and mail you a check with your results, revealing to you how the coin actually came out, how you answered the questions (because you won't remember), and what your payout is. Expect it to take 4-8 weeks."

Alice and Bob leave their respective rooms. They run into each other in the lobby.

Wait

Can Alice and Bob run into each other in the lobby? Aren't they, like, in different possible worlds or something? No, silly. That's confusing people. They're in the same world. They've been in the same hallway all along, separated by only a paper-thin wall.

Ok, so they run into each other in the lobby. They hit it off, decide to go out to a pub and grab a pint together. Naturally, the conversation turns to the strange experiment they each went through. Neither one is going to know how the coin flip actually went or what subsequently happened for another 4-8 weeks.

They begin to debate. How should they best guess what their results might have been? What if they'd like to wager against one another about the results? Should they have significantly different estimates of what they're going to see in their results? Should Alice think that there's a 1/3 chance that they're going to learn that it was heads, while Bob should think that there's a 1/3 chance that they're going to learn that it was tails? Did they truly "update" their probabilities during the course of the experiment?

No. Of course not. If either of them thought that, you could take their money. They should both think that it was 1/2 either heads or tails. This is because they didn't "update" some probability estimate. They didn't enter some weird different possible worlds where the physical Alice and Bob could never meet again.

Instead, Alice and Bob are both capable of having a perfectly reasonable conversation. "Yeah, of course I think the probability of the coin flip was 1/2. It's just because of the weird observation function of the experiment that I computed that there was a different probability for what I was likely to observe." "Yeah, me too, but my observation function was the opposite, so I computed that I was likely to observe the opposite. But obviously, at the same time, the probability of the coin flip was 1/2."

They're just different probabilities with different meanings. You can just compute them from the observation functions.
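A minimal simulation of that computation (my illustration, assuming the standard schedule of one awakening on heads and two on tails):

```python
import random

# Hedged sketch of the distinction drawn above: the probability of the coin
# flip itself vs the probability that a randomly selected awakening is a
# heads-awakening. The two numbers come from the same runs; only the
# observation function differs.
def simulate(trials=100_000, seed=0):
    rng = random.Random(seed)
    heads_flips = 0
    awakenings = []  # one entry per awakening: True if the coin was heads
    for _ in range(trials):
        heads = rng.random() < 0.5
        heads_flips += heads
        # heads -> one awakening, tails -> two awakenings
        awakenings.extend([heads] * (1 if heads else 2))
    p_coin = heads_flips / trials                    # settles near 1/2
    p_awakening = sum(awakenings) / len(awakenings)  # settles near 1/3
    return p_coin, p_awakening
```

With enough trials, p_coin sits near 1/2 while p_awakening sits near 1/3: two different numbers, computed from the same world, answering two different questions.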

1 - That time, I was trying to get people to figure out that they could have one individual's brain retaining multiple different probabilities, with multiple different meanings. I guess this time, I'll just try having multiple different minds meeting.

Perhaps part of it is that married women who changed their name want to vote too.

As someone whose wife came from a foreign location where women don't tend to change their names, and who can thus attest to a significantly higher-than-normal level of grief over a wife changing her name, I can say that getting US documentation sufficient for voting is probably the easiest part of a married woman changing her name.

This is correct. My comment is mostly trash; it's a pointer to something interesting with just enough summary to put it in context and get over the top-level comment barrier. Rov_Scam's is good. McKenzie's is good.

How about in Variant 2? Should Alice do some weird anthropic probability shifting for what she puts into the computer for Bob? Should she do two different weird anthropic probability shifting things, one for herself and a different one for Bob?

...wouldn't it be sooooo much simpler to just say, "Alice is capable of distinguishing between the probability of the coin flip itself, the probability that she observes an outcome, and the probability that Bob observes an outcome," rather than some conceptual mess garbage about her simultaneously anthropically probability shifting for Bob opposite her own? Like, what do you even mean "anthropically probability shifting" now? I thought it was supposed to be something about updating a belief on the coin flip, itself, but it seems like you've already just admitted that that is not happening. She still has "the naive weight of the coin". She still knows about this probability as a distinct probability.

This is not a Quality Contribution.

This is a Quality Contribution. You really ought to just read the whole thing and maybe not even bother reading my comment.

Patrick McKenzie, if you don't know, knows a lot about financial infrastructure and its interaction with tech, regulatory, and human systems. He routinely shares his knowledge in mostly accessible form online. He is also one of the few authors where I would be shocked if I learned that he used LLMs in his written work. When I read him, he often plays incredibly subtly, almost understating his point, often making me have to think again to see if I think he's making the implication I think he might be making. His writing is quite unique in my mind. The linked post is his sizable contribution to the conversation about the SPLC indictment.

When the indictment came out, I didn't really say much. I didn't have a lot of specific expertise on the legal case. I was generally suspicious of how one could draw proper lines around the idea of 'donor fraud', where non-profits are defrauding donors who usually give money to non-profits without any strings attached.1 I upvoted @Rov_Scam's comment to that effect. I don't want to denigrate it; I think it was a great comment, fully deserving of a Quality Contribution in its own right. However, I now (only with the benefit of hindsight of McKenzie's post) think it may have taken a bit too much of a gloss over the bank fraud charge.

McKenzie is very serious about the bank fraud charge. He appears to have lived and breathed a world where bank fraud charges are routinely brought and routinely won by the government. He recounts how incredibly easy it seems to be for the government to routinely win on these cases. I don't know that I have a good summary of this; again, you kind of should just read it. He seems to think that basically any lie to a bank will do (a single piece of paper or a single word, he says), and he goes on at length about the extensive record-keeping done by banks and how these systems allow both internal-to-banks investigators and external regulators to easily find the documents or communications to make such charges a done deal. He gives a plethora of examples of actual people going to prison for these exact charges to make his case.

He then turns to what may be more important for the broader Culture War. Sure, lots of conservatives are vaguely annoyed with the SPLC, but even if they get brought up on charges, how much does that really change in the world? He lays out the technical means by which banks evaluate their customers and their transactions. Some of this might be known to people who were already steeped in this portion of the Culture War, but I hadn't really realized until he laid it all out. Sure, I knew of stuff like OFAC, where the Treasury will give a list of foreigners/entities that US banks are prohibited from dealing with, and sure, they pay close attention to that list and scrutinize their customers/transactions accordingly. But they also use all sorts of other 'data products' to screen out potentially 'problematic' customers/transactions. One of the most widely used was developed by the SPLC, which if you're one of those conservatives who were vaguely annoyed by the SPLC but didn't know this already, get ready for your blood to boil.

Admittedly, as he points out, much of this was actually public information. I just never had it laid out in one place, in a way that really made it sink in what was going on.

Not just banks, but all kinds of other tech/finance companies, including regular companies who have employer matching contributions to non-profits, use lists like those generated by the SPLC to filter who they transact with. They want to tell regulators that they take steps not to transact with The Bad People, and how else can they feasibly do that other than to just use the SPLC list? In one of those 'public, but I didn't really know about it/internalize it' moments, he talks about how Amazon used the SPLC list, and how Jeff Bezos talked about it in public Congressional testimony:

Jeff Bezos, in Congressional testimony, describing Amazon's reliance on the SPLC data product for AmazonSmile, a now-discontinued charitable product they offered:

"We use the Southern Poverty Law Center data to say which charities are extremist organizations. We also use the U.S. Foreign Asset Office [sic] to do the same thing.”

Bezos was interrupted before he could finish his next thought; you're welcome to read the testimony for full context. He is clearly referring to the OFAC SDN list.

Bezos went on to elaborate that the Fortune 2 company could not operate AmazonSmile without some way to kick out the extremist organizations and that SPLC was, effectively, the only reasonable option. He asked Congress for other suggested data providers. None were offered. (No, really, he did that.)

Let us pause to acknowledge that Bezos, one of the richest men in the world, considers these two four-letter organizations as peers. One of them is created by statute, operates within constitutional and administrative-law constraints, and answers to Congress, the courts, and ultimately the people of the United States of America. It could jail Bezos, personally, for willful non-compliance. And the other is …some people in Montgomery with a very specific interest, whose decisions are subject to review by no court, and whose only power appears to be moral suasion.

Bezos was equally and entirely committed to satisfying both.

Why? We’ll return to it in a minute.

[Me here: returning to it after a minute]

Well, remember, when you bought the data product, you were also buying someone anticipating your concerns before you even voice them and preparing options before you ask. Jeff Bezos’ words echo in San Francisco today: Does anyone know another option?

[Me here: returning to it after another minute]

About a month later 15 Republican lawmakers wrote Bezos a letter, saying:

Amazon’s ongoing reliance on the SPLC, with its documented anti-conservative track record, reinforces allegations that Big Tech is biased against conservatives and censors conservative views.

The letter did not contain a recommendation for an alternative data product.

What's next is what may be the biggest impact of the SPLC indictment. Not some guys from some non-profit, no matter how influential, going to prison. Instead:

Now, a quiz: do you think Compliance at a bank is neutral on “Can the bank delegate transaction-level decisioning authority, in any part of the business, however small, to an entity under federal indictment for bank fraud? Does the answer change if they are convicted of bank fraud?”

No! Compliance will not let you do that! Not because they are worried about the integrity of the blacklist. An accused bank fraudster has the final say to approve money movement out of a regulated financial institution. That is very likely intolerable to Compliance.

That is, he thinks that all those companies, those banks, finance companies, internet companies, employers matching contributions to non-profits, etc. will probably have to stop letting the SPLC tell them who The Bad Guys are that they shan't transact with.

His post goes on.

He describes an alliance of non-profits, organized by SPLC, that he describes as having engaged in an extremely lengthy campaign to pressure companies. He describes the mechanics of how their pressure campaign worked, how they burrowed themselves into the policies and workings of many companies. Again, I find it hard to summarize, and you should read, but his persistent theme is to imply that these folks were claiming to be non-partisan in this non-profit work, but building an extensive case that they were clearly targeting partisan targets, and their entire operation dried up after their partisan targets seemed to be no longer a target.

In his typical understated fashion, right near the end, he tells a parable, presumably for those who have eyes to see and ears to hear. My interpretation of his parable is that non-profit law requires folks to actually be non-partisan. Of course, non-profit law is not McKenzie's specialty, so others closer to that world will have to chime in. But it seems to me that he's clearly indicating that he thinks it's plausible, perhaps likely, and if The Powers That Be haven't thought of it yet they probably should, for the gov't to continue going after various folks who were involved in this.

1 - For, uh, reasons, I am aware that people can and do attach strings to donations plenty of times. Moreover, I'm aware that from the non-profit's perspective, this can be quite annoying unless they've already chosen to build boxes for those particular strings (e.g., "We have a 'X Fund', and donations marked as going to the X Fund will be used in the X Fund"). In fact, my sense is that plenty of non-profits will simply refuse donations that try to attach additional strings that they don't already have boxes for.

I don't act particularly Indian, beyond a fondness for biryani.

It would be monumentally difficult for anyone to not act particularly Indian in this particular way.

In fairness, it trips up a lot of people. I would probably say including you. Last time we discussed it, you didn't come back to explain how your position worked, but my best interpretation was that your position thought:

Alice is smart enough and capable of distinguishing between "the probability that Bob observes an outcome" and "the probability of the coin flip, itself"... but is too stupid to distinguish between "the probability that I, Alice, observe an outcome" and "the probability of the coin flip, itself"?

I paid a decent amount of attention when they did the LLM-vs-LLM chess tournament. You could read a bunch of the 'thinking' tokens (I use single quotes not to make fun of the term, but to only note that it is genuinely difficult to unpack what the word does/does not mean besides being conventionally used for a particular set of tokens). Some of them were genuinely impressive. Some were outright gibberish. Obviously, they were typically better in the opening phase of the game, where there is likely gobs of information on the internet/in books spelling out the reasoning behind particular moves. But that is not to say that it was never impressive later in the game. Of course, that competition used a pretty significant harness that objectively retained the true state. To what extent that matters and/or can be overcome is an ongoing question.

One possibility for trying to make progress in testing this distinction is to consider chess variants, particularly novel ones that are very unlikely to have anything in the training data. 960 is almost this, but something about it is at least in the training data, even if very minimal in comparison; to start, I don't even know that I'd go that far. "Let's play a game of chess where the knights and the bishops switch starting places," might be a good start. Harder versions would be, "Let's play a game of chess where the knights move like bishops and the bishops move like knights." It's logically the same, but you have to keep track of a difference in notation as well as reasoning. I imagine this would actually make the game harder for most people, since they're so used to thinking in one way.

Good players will likely make more reasoning mistakes in calculating longer lines, but will probably be able to double-check well enough immediately before making a move that they're not likely to attempt all that many illegal moves (unless they are pretty severely time-constrained). Classic engines would have essentially no degradation in performance (because you'd have to bake in the difference). I'm not quite sure how to think about what kind of degradation to expect from LLMs or, having observed some level (or no) degradation from them, how one would interpret it; but I'd be interested to see.

One could get a bit more whacky, like, "Knights can no longer simply jump over pieces; at least one of the two possible L directions needs to be open," possibly also throw in for the fun of it, "Bishops may now jump over one piece along their route," or something. I played Knightmare Chess long ago when I was young. There are a ton of tweaks you can do to mess with stuff. For humans, it is fun to keep track of various rule modifications and try to reason through it.
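For the first variant, the modified starting position is easy to generate mechanically. A sketch (mine, using standard FEN notation) that swaps the knight and bishop home squares:

```python
# Hedged sketch: build the starting FEN for the variant described above,
# where knights and bishops switch starting places. The piece letters and
# FEN layout are standard; the swap is the only modification.
START_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"

def swap_knights_bishops(fen=START_FEN):
    board, rest = fen.split(" ", 1)
    # n<->b and N<->B, applied only to the board field of the FEN
    table = str.maketrans({"n": "b", "b": "n", "N": "B", "B": "N"})
    return board.translate(table) + " " + rest

# swap_knights_bishops() ->
#   "rbnqknbr/pppppppp/8/8/8/8/PPPPPPPP/RBNQKNBR w KQkq - 0 1"
```

Feeding a position like this to an LLM (or, with a baked-in variant definition, to a classic engine) would give a concrete way to run the comparison sketched above; the swap is an involution, so applying it twice recovers the standard start.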

At the very least, if LLMs absolutely tank in these sorts of variants, just spamming illegal moves all the time, while humans are able to at least moderately cope, it would be some amount of useful information. Of course, one must always have the disclaimer that it is certainly possible that with enough progress and compute, LLMs may even outperform humans. We sort of just don't know.

My concern would be that for someone who gets up in the morning and doesn't like who they see in the mirror, that surgery will not fix what ails them.

Are you being honest with yourself that you could just get one surgery, and then you would be happy? That it would remedy what gnaws at you?

I had a plastic surgeon come up in one of the podcasts I listen to. I remember him saying that there was a category of people, I don't remember the whole set of criteria, but I remember that it was young-ish men, that he simply would refuse to operate on, specifically for this reason. He had too many experiences of people in that category (again, I don't remember all of the qualifiers) exhibit this exact phenomenon, and they'd keep coming back for something else, then something else, then something else, and it just wasn't healthy for them.

I would have thought that it would be women who are more likely to have this problem, which is why it stuck out in my memory that he called out men.

Which is just another way of saying "they're irrelevant".

No, they're just as relevant as any other individual voter.

Speaking generally, I don't know that I have a useful definition of being "relevant" or "irrelevant". I'm hearing very similar claims that, after Callais, gerrymandering can or will make many black voters "irrelevant". One could pithily retort that they are just as relevant as any other individual voter, but I don't think that would be satisfying to the person making the claim.

This is sort of precisely where I think there is a simmering culture war, the clash between your comment and that of @JTarrou.

Scoping out a bit, the stylized story I might tell would be that back in ye olde days of Snowden/Assange, there was this sense of "information is meant to be free" and "sunlight is the best disinfectant". My sense is that at least some of those folks had a change of heart when their own ox was gored. But I think it's still a significant culture war.

Are soldiers supposed to keep secret military operations secret? Or is part of the point of things like prediction markets specifically to say something like "information is meant to be free", even governments shouldn't be able to keep even that sort of stuff secret, and it's good to build tools with the "whole point" being to prevent folks from being practically capable of keeping even stuff like that secret?

I certainly don't think this culture war has been won in either direction. It's just sitting there, menacingly, underneath a variety of these related debates.

We have an indictment of a special forces soldier, who participated in the planning/execution of the Maduro raid, for making Polymarket bets on questions about Maduro and US involvement in Venezuela.

Specifically, Gannon Ken Van Dyke used USDC.e to trade on at least four markets: "Maduro out by ... January 31, 2026", "US forces in Venezuela by ... January 31, 2026", "Trump invokes War Powers against Venezuela by ... January 31, 2026", and "Will the US invade Venezuela by ... January 31, 2026?" The last of the four markets actually resolved to NO, but he sold his position at some point before he took losses.

He apparently didn't do a great job of hiding it. He transferred his winnings to "a foreign cryptocurrency 'vault' which advertises that it generates interest for depositors" and then a couple of weeks later, transferred them to his crypto exchange account. At some point after (the indictment doesn't say), he cashed it out and transferred it to a brand new Interactive Brokers account (which was presumably in his real name). The only steps they mention him taking to try to cover his tracks were asking Polymarket to delete his account (claiming that he had lost access to the associated email address) and changing the email address on his crypto exchange account. I think the implication in the indictment is that the original email account associated with his crypto exchange account was "subscribed to in his name".

To my knowledge, this is the first US prosecution of someone trading in 'war prediction markets' using classified insider information. Unsurprisingly, they throw in quite a few different counts, and I'm not qualified (or would have to do more work) to have a sense of whether some of them are unlikely to succeed (did he do a sort of "fraud" in some technical sense? somebody would probably have to know the case law of the particular statute).

Everyone has known that this sort of thing was possible; some have criticized prediction markets for even having specific markets that are vulnerable to this type of insider trading on sensitive national security matters. The buzz on military subreddits by soldiers is that they're confident civilian politicos have also made a bunch of money by trading on this stuff. Are they not getting prosecuted because they're connected to the powers that be, while lowly grunts have examples made out of them? Are others just better at hiding their tracks?

If I had one observation of my own to add, I would reflect on the nature of monetary incentives. They're potentially large; this guy allegedly made about $400k. I think back to the story of cyber crime generally. Some stylized accounts say that long ago, internet viruses or whatever were kind of a game that people sort of did for fun. Some people just liked causing damage or they just wanted to see what it was possible to do. There weren't super easy ways to make a bunch of money with it. It certainly wasn't non-existent, but there were genuine, significant frictions. Then, when crypto made it vastly easier to extort folks for real money from the other side of the world, it took off on industrial scales.

In some sense, I feel a bit of that here. People getting in trouble for bad use of their access to classified information obviously isn't a new problem. Folks have been doing it because of a girl they like or because they decided they now believe in some other government/social or political movement/whathaveyou more than their promises to their own government. Maybe some folks even just found it fun. There was at least the one guy who posted classified information in the forums of a video game, because he wanted the US tanks in the game to be stronger. Foreign governments have long been trying to monetize this, as well, paying handsomely for information provided by insiders. But that path to money is kind of hard and cumbersome. You have to find some legit way to contact some component of the foreign government, possibly build a relationship, etc. Now, there are big piles of money, just sitting there, ready to be taken, and my guess would be that it's probably easier for folks to think that they can figure out how to cover their tracks while they bank a bunch of money this way. Many of them might actually be wrong, be bad at covering their tracks, and get caught. Others might succeed, and I have no real sense for how much this phenomenon will grow.

I think the one I'm remembering might have been a different one that came out later, but yeah, probably similar. There is, of course, a wide range of estimates, depending on model details.