ControlsFreak
User ID: 1422
You might notice that neither side in my example scenario had any political descriptors attached.
Would you, personally, use the form of reasoning you're describing and come to the conclusion that one or the other side in my example scenario is "Fascist Authoritarian"? If so, please describe how you used that reasoning to reach that conclusion.
...I hate to just ask again, but, uh, how high have you tried? My general belief is that supply curves slope upwards.
If the board of a company fires the CEO, but he tries to lock the doors to the building and hole up inside, so the board calls the police and has him evicted ("at gunpoint"), does that make them "Fascist Authoritarians"?
at any reasonable wage
How high have you tried?
Only a couple minor responses, as I think we're mostly understanding each other.
this is the sense in which I don't see a reachable point where honesty and bargaining come to strictly dominate.
My only quibble is that I don't think we really need the "honesty and" part. The question really is whether, even with dishonesty, bargaining can be achieved.
As a note I do expect that bargaining frictions will be reduced, but the existential question is whether they will be reduced by a factor large enough to compensate for the increased destructiveness of a conflict that escalates out of control.
The weirdly good thing about the increase in destructiveness ("good" only in the narrow sense of bargaining and likelihood of war, not necessarily in general) is that this increases costs to both sides in the event of war. As such, it increases the range of possible bargaining solutions that keep the peace. Both factors (this and the reduced bargaining frictions) should decrease the likelihood of war.
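The arithmetic behind this is the standard Fearon-style bargaining model. A minimal sketch (variable names and the specific numbers are mine, purely illustrative):

```python
def bargaining_range(p, cost_a, cost_b):
    """Fearon-style bargaining model of war.

    p      : probability side A wins a war (between 0 and 1)
    cost_a : A's cost of fighting, as a fraction of the disputed stake
    cost_b : B's cost of fighting

    A's expected value from war is p - cost_a; B's is (1 - p) - cost_b.
    Any split x (A's share) with p - cost_a <= x <= p + cost_b beats
    war for BOTH sides, so the peaceful range has width cost_a + cost_b.
    """
    low, high = p - cost_a, p + cost_b
    return low, high, high - low

# Doubling the destructiveness of war doubles the width of the
# range of settlements that both sides prefer to fighting.
lo1, hi1, w1 = bargaining_range(0.5, 0.1, 0.1)   # width ~0.2
lo2, hi2, w2 = bargaining_range(0.5, 0.2, 0.2)   # width ~0.4
```

This is exactly the sense in which more destructive war is "good" for peace: the costs enter the range's width directly.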
Verily, it seems like things are proceeding about as I predicted over a year ago. I pointed out in a parent comment that the #Resisters suffer a coordination game problem, and they lack any clear object to coordinate around. There is unlikely to be a singular event that causes all of the resisting bureaucrats to simultaneously stick their necks out and create a large conflagration in which they plausibly have more resources and power to bring to bear than the President. Instead, when USIP tries to resist, other bureaucrats sit on the sidelines and watch, perhaps wondering what will happen to them or whether they can come up with a plan on their own. But they will not rush to allocate some alternative police force to protect USIP HQ. The head of USIP basically has to decide whether he or she is going to, on his/her own, resist and refuse to let the President's political will prevail.
...but mostly as I predicted, if and when it comes to the point of, "We're not going to let you into the building," the President can clearly muster the raw force of boots to force the issue. There is roundable-to-zero chance that USIP's paltry security team is going to muster enough force or start shooting bullets. This just isn't the way that the war with the bureaucracy will be fought. If an agency pulls a minor stunt to not let them into the building, the President can and will have his team show up with a very minor show of force, and that will basically be the end of that form of resistance.
Of course, they will take it to the courts, and there, battles can go different ways. Different agencies have different statutes passed by Congress, and different particular legal battles may be resolved in different ways. For the most part, the primary questions are going to revolve around the judiciary, to what extent the executive complies, on what timescales, etc. We see that playing out in other domains. "Some silly bureaucrats think they can #resist by just locking the doors to the building," was never a plausible path.
Postscript. Matt Levine sometimes talks about the question of, "Who really controls a company?" Often, this comes up for him in battles between CEOs and boards, where they're like both trying to fire each other. Similarly, there are about zero successful attempts of the type "he had the keys to the building, so he locked the doors". However, he notes that sometimes, things like, "He's the only one who has the passwords to access their bank accounts," or whatever, tend to be more annoying. Sure, you can eventually go through the courts and get them to order the bank to turn control over to whoever, but banks are reluctant to take that sort of action on their own without a court involved. Obviously, situations like, "They hold the only keys to MicroStrategy's vault of Bitcoin or the encrypted vault that contains their core product," or whatever may be even more contentious. Fun to think about sometimes, but yeah, "We locked the physical doors," is basically never a viable strategy.
I mean, I kinda get your point that it's the way that he thinks about it, but he also says that it gives us straightforward bounds:
A paperclip-maximizing superintelligence is nowhere near as powerful as a paperclip-maximizing time machine. The time machine can do the equivalent of buying winning lottery tickets from lottery machines that have been thermodynamically randomized; a superintelligence can’t, at least not directly without rigging the lottery or whatever.
But a paperclip-maximizing strong general superintelligence is epistemically and instrumentally efficient, relative to you, or to me. Any time we see it can get at least X paperclips by doing Y, we should expect that it gets X or more paperclips by doing Y or something that leads to even more paperclips than that, because it’s not going to miss the strategy we see.
So in that sense, searching our own brains for how a time machine would get paperclips, asking ourselves how many paperclips are in principle possible and how they could be obtained, is a way of getting our own brains to consider lower bounds on the problem without the implicit stupidity assertions that our brains unwittingly use to constrain story characters. Part of the point of telling people to think about time machines instead of superintelligences was to get past the ways they imagine superintelligences being stupid. Of course that didn’t work either, but it was worth a try.
So, I guess, like, think about the best possible plans you could come up with to put some error bars on the expected value of war. Perhaps notice that political scientists don't just ask the question, "Why is there war at all?" (...coming up with the answer involving bargaining frictions...) but also the question of why war is actually still somewhat rare, especially if we think about all of the substantive disagreements there are out there. They point out that the vast majority of wars that are started actually end surprisingly quickly; often, as some information is learned in the process, a settlement is quickly reached. Superintelligences are going to be wayyyyyyyyy better at driving down those error bars and finding acceptable settlements.
This is where I'm appealing to things like the >90% draw rate in computer chess (when the starting positions are not specifically biased).
I think that's a fact particular to chess - I don't expect the same result in computer Go / othello / some other game that is less structurally prone to having draws.
I guess it's not the draws, themselves, that are "the thing". Let me try to put it another way. One of the top GMs in the world made a comment not too long ago about their experience working with very powerful computers. He said something along the lines of, "With the computer, it's always either zeros or winning." That is, he basically viewed it as that once you have enough computanium, for many many many positions, either the computer sees a way to essentially just straight equalize or it can see out to a win. Now, obviously, this is not strictly true, and it's obviously not true in all positions, as you get closer to the start of the game. But they can see the expected outcome sooo vastly better than we can. In the same way that people want to blow up that ability to things like "can engage in warfare sooo vastly better than we can", it should also blow up their ability to see expected outcomes and come to negotiated settlements sooo vastly better than we can.
I don't see how improvement in those models means that there is a reachable point where winning strategies switch from being based on deception and trickery to being based on cooperation stemming from mutual knowledge of each others' strategies
The attempted resolution in the financial markets paradox is that people just stop investing in more information. Could they double down on deception and trickery? Perhaps. But that seems like an unlikely result, game-theoretically. "Babbling equilibrium" or "cheap talk" are sometimes invoked, depending on the specific formalization. There are others that aren't in that wiki article. I could walk through a bunch of different models for how humans try to deal with deception and trickery in different domains. Presumably a superintelligence will know all of them and more... and execute even better in implementing them. It took me a long time to realize this, but when you think of deception and trickery as part of the strategy set, then the correct game-theoretic notion of equilibrium is not necessarily "cooperation stemming from mutual knowledge of each others' strategy", but "the appropriate equilibrium stemming from mutual knowledge of each others' strategy, which may contain deception and trickery, and you are each reasoning about the other's ability to engage in deception and trickery, the value the other may obtain from such, etc." Of course I know that my opponent may try deception and trickery, so I need to reason about it. A superintelligence will reason about it even better. Probably the easiest thing to think about here is again the game Diplomacy.
Where the mere game of Diplomacy differs from actual war in the real world is that we have good reason to believe that the costs of engaging in war are much much much higher, so we have a very big bargaining range, and we need quite significant bargaining frictions to get in the way. I still don't see how a superintelligence doesn't reduce the bargaining friction.
For the record, you don't have a problem with me. You have a problem with the people who hold the position that we are approaching an AI singularity and that doom is inevitable because the AI will have all these incredible characteristics. I don't actually hold that position; I'm just investigating it.
In any event, I again don't think it needs to be actually omniscient. It just needs to be able to reduce error bounds enough to eliminate the bargaining friction. Since war is very costly, it certainly doesn't need to be perfect; it just needs to get the error bars down enough. Think of it as a continuum. As the ability to gather information, model, and predict accurately goes up, the likelihood of war goes down, since the bargaining frictions due to uncertainty are reduced. Yes yes, it may be only when we take the limit that the likelihood of war goes down to precisely zero. I'm not even quite sure of that, because since war is so costly, we can probably still tolerate a fair amount of uncertainty and still remain in a region where settlements can be negotiated.
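To make the continuum concrete: in a toy mutual-optimism version of the bargaining model (my construction, not anything canonical), war only breaks out when the two sides' estimates disagree by more than the combined costs of fighting. A hedged sketch:

```python
def settlement_exists(p_hat_a, p_hat_b, cost_a, cost_b):
    """Toy mutual-optimism model (illustrative; parameters are mine).

    p_hat_a : A's estimate of A's win probability
    p_hat_b : B's estimate of A's win probability
    A insists on a share of at least p_hat_a - cost_a; B concedes at
    most p_hat_b + cost_b.  A peaceful split exists iff the optimism
    gap (p_hat_a - p_hat_b) is no larger than the combined war costs.
    """
    return p_hat_a - cost_a <= p_hat_b + cost_b

# Because war is very costly, bargaining tolerates a lot of residual
# disagreement (gap 0.2 < combined costs 0.3)...
assert settlement_exists(0.7, 0.5, 0.15, 0.15)
# ...but large enough mutual optimism still breaks bargaining
# (gap 0.5 > combined costs 0.4).
assert not settlement_exists(0.9, 0.4, 0.2, 0.2)
```

This is why the error bars don't need to go to zero; they only need to shrink until the disagreement fits inside the (wide, because war is costly) bargaining range.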
The AI singularity/doom people think that, for all intents and purposes, we're headed for that limit. They may be wrong. But if one believes their premise, then I think the conclusion would be that war goes to zero.
How do you know?
I don't know! I'm just temporarily importing my understanding of the tenets held by the singulatarian doomerists. They seem convinced that there's nothing we can do, not militarily, not intelligence community, not nothing, to even hold a candle in comparison to how good it's going to be at executing. Presumably, a part of its ability to be so good is going to be understanding the world around it with significantly smaller error bars than we currently have. I don't think they even need it to be completely zero error bars; just that it's wayyyy better than ours. What I think is related is that we don't need to have perfectly zero error bars in order to avert war; we just need small enough error bars to overcome the bargaining frictions. Given the high costs of war, that seems pretty feasible.
I say the god AI will not exist.
This is sort of the crux. I happen to agree with you. The point of my comment was to investigate the tenets of a group of folks and see what the implications are. I think that if one adopts a position like in that Scott quote, then the implication is something like the end of war.
Can you name three people who would agree that they make this "widely" held prediction?
Probably not. I don't keep track of names of people. Obviously, there's Big Yud. I quoted Scott below. I'd have to wade further into those doomerist circles to get a third name, and meh.
The important question is whether they are effectively perfect executors compared to each other.
This is where I'm appealing to things like the >90% draw rate in computer chess (when the starting positions are not specifically biased). We also see something similar in the main anti-inductive system that I'm making comparison to - financial markets. At one point, I had heard that an offhand estimate of how long a good trading idea lasts before it's discovered and proliferated is like 18 months. The models just keep getting better.
If you pit two top engines against each other, you won't have any idea who will win. You know it'll be a coin toss but you won't know who will win.
Emphasis added. I don't need to know in order for the AI to tell me that the best outcome is a negotiated settlement within certain parameters.
the opponent's moves are still unknown.
Agreed, but sort of irrelevant. The chess engine is still executing perfectly, even though it doesn't actually know what moves the opponent will ultimately make.
Playing a game well is one thing, but solving a game (determining if a player can force a win) is entirely harder. Checkers, tic-tac-toe, and connect four are solved, while chess is not.
I think the answer here is again that it is ultimately irrelevant. We didn't need to solve chess or diplomacy to have an engine become a nearly perfect executor or to narrow the range of outcomes significantly (>90% draws unless you extremely bias the starting positions, for example).
War is about using force to achieve a political goal.
That would be the substantive disagreement part. Classical theory says that that's not enough for war. You also need a bargaining friction, otherwise, you'll get a negotiated settlement.
Nice find!
I quoted Scott below, but yes, everyone in the Big Yud singularity doomerist community. My post is taking one of their tenets seriously and seeing the implications. My sense is that they won't be particularly happy with such implications. Of course, part of the bit is exposing that many many people don't believe their tenets, surfacing that disagreement, with a clear application of how it contrasts with their other claims.
I think the response would be that you don't need arbitrary precision. You just need enough to get within a pretty wide range of bargaining solutions. That may be doable at a higher level of abstraction, and a perfect executing AI can find that proper level of abstraction.
Of course, this process might not even look like finding the right level of abstraction to our eyes. In chess, grandmasters sometimes look at computer moves, and they struggle to contextualize it within a level of abstraction that makes sense to them. Sometimes, they're able to, and they have an, "OHHHHHHHH, now I see what it's saying," even though it's not "saying".
If there is value in weeding out the bullshit, the omniscient AI will weed out the bullshit. AI already plays diplomacy, trying to weed out bullshit. Just increase the scale. The best bullshitting Diplomacy players will be mere Magnus Carlsens against it. The Chinese AI and the American AI will both compute all the way out to the draw, just like the TCEC.
I think you're doing the thing where you haven't internalized "the thing". From Scott:
Consider weight-lifting. Your success in weight-lifting seems like a pretty straightforward combination of your biology and your training. Weight-lifting retains its excitement because we don’t fully understand either. There’s still a chance that any random guy could turn out to have a hidden weight-lifting talent. Or that you could discover the perfect regimen that lets you make gains beyond what the rest of the world thinks possible.
Suppose we truly understood both of these factors. You could send your genes to 23andMe and receive a perfectly-accurate estimate of your weightlifting potential. And scientists had long since discovered the perfect training regimen (including the perfect training regimen for people with your exact genes/lifestyle/limitations). Then you could plug your genotype and training regimen into a computer and get the exact amount you’d be able to lift after one year, two years, etc. The computer is never wrong. Would weightlifting really be a sport anymore? A few people whose genes put them in the 99.999th percentile for potential would compete to see who could follow the training regimen most perfectly. One of them would miss a session for their mother’s funeral and drop out of the running; the other guy would win gold at whatever passed for this society’s Olympics. Doesn’t sound too exciting.
A team sport like baseball or soccer would be harder to solve. Maybe you’d have to resort to probabilistic estimates; given these two teams at this stadium, the chance of the Red Sox winning is 78.6%, because the model can’t predict which direction some random air gusts will go. I guess this is no worse than having Nate Silver making a betting model. But on the individual level, it’s still a combination of your (well understood) genes and (well understood) training regimen.
Hedge funds already have some of the best weather models in the world. There's alpha there right now. Or at least there was; I don't know how much has been anti-inducted away. The god AI will certainly be able to do at least as well. It will probably make our current best models look like a mere Magnus Carlsen. And if there's alpha in taking a more minute view, scoping the model in to a particular stadium, why can't it do that? Where there is alpha in the AI getting information, the AI will go there and get the information. It will be able to massively reduce the error bars. And all you need to get rid of war is reduce the error bars enough to get to a negotiated agreement. There's tons of alpha there, so there they will go. Until that alpha has been anti-inducted away, and we're right back in the paradox.
SMBC gets this close.
I've been thinking about the Grossman-Stiglitz Paradox recently. From the Wiki, it
argues perfectly informationally efficient markets are an impossibility since, if prices perfectly reflected available information, there is no profit to gathering information, in which case there would be little reason to trade and markets would eventually collapse.
That is, if everyone is already essentially omniscient, then there's no real payoff to investing in information. I was even already thinking about AI and warfare. The classical theory is that, in order to have war, one must have both a substantive disagreement and a bargaining friction. SMBC invokes two such bargaining frictions, both in terms of limited information - uncertainty involved in a power rising and the intentional concealment of strength.
Of course, SMBC does not seem to properly embrace the widely-held prediction that AI is going to become essentially omniscient. This is somewhat of a side prediction of the main prediction that it will be a nearly perfectly efficient executor. The typical analogy given for how perfectly efficient it will be as an executor, especially in comparison to humans, is to think about chess engines playing against Magnus Carlsen. The former is just so unthinkably better than the latter that it is effectively hopeless; the AI is effectively a perfect executor compared to us.
As such, there can be no such thing as a "rising power" that the AI does not understand. There can be no such thing as a human country concealing its strength from the AI. Even if we tried to implement a system that created fog of war chess, the perfect AI will simply hack the program and steal the information, if it is so valuable. Certainly, there is nothing we can do to prevent it from getting the valuable information it desires.
So maybe, some people might think, it will be omniscient AIs vs omniscient AIs. But, uh, we can just look at the Top Chess Engine Competition. They intentionally choose only starting positions that are biased enough toward one side or the other in order to get some decisive results, rather than having essentially all draws. Humans aren't going to be able to do that. The omniscient AIs will be able to plan everything out so far, so perfectly, that they will simply know what the result will be. Not necessarily all draws, but they'll know the expected outcome of war. And they'll know the costs. And they'll have no bargaining frictions in terms of uncertainties. After watching enough William Spaniel, this implies bargains and settlements everywhere.
Isn't the inevitable conclusion that we've got ourselves a good ol' fashioned paradox? Omniscient AI sure seems like it will, indeed, end war.
They're not backyard-maintainable, but nor are modern ICEs.
My sense is that the Toyota Dynamic Force engines are still mostly backyard-maintainable (there will always be a question of level of effort as well as some specific sub-systems), and they're pretty darn efficient. Seems they got there with just good old fashioned design optimization and only a couple additional computer-controlled subsystems.
It's not clear exactly what is happening with which sources of dollars; there are a bunch of different numbers in the article, and they're mostly unattached to any particular mechanisms. It may be only $400M out of $5B. It's not clear if it's just some funding agencies or some other criteria. My guess from the following sentence is that it's currently just some funding agencies:
The Departments of Education and Health and Human Services plan to immediately issue stop-work orders on grants to the school, the task force said.
That would make sense, as DoE/HHS are a very small part of federal research funding.
One thing to note is that a "stop-work order" is a particularly harsh tool. Rather than simply defunding the agencies, so that there simply aren't new grants to go around (and no one knows how they can change behavior to improve the situation), a stop-work order says that the university must completely stop doing anything related to an existing grant. They certainly can't spend any of the money, not even on grad student salaries. It must grind to a halt.
I have heard about this sort of thing happening before. Back when the gov't started getting serious about China's influence in academia, they started requiring a bunch of disclosures about China-related stuff. Apparently, one guy at one university screwed up badly enough that they issued a stop-work order to everything the university did with their federal funding until they could sort everything out. At the same time, they were even prosecuting professors if they weren't disclosing. The message was clear that the gov't took this stuff seriously, and if anyone screwed up, then everyone, at the institutional level, paid the price. As I put it here, that makes the game theory pretty easy. If you're a top tier talent, you can't afford to FAFO with some university that can't get it together at an institutional level, no matter what else they might offer you.
Of course, right now, this seems to be limited just to antisemitism (and so far, just Columbia) rather than extending to further bad behavior in academia. I, of course, proposed doing this type of thing for when a university, at an institutional level, does basically anything that discriminates on the basis of race/gender (and I got a lot of downvotes here for saying that such a plan was way better than indiscriminate "chemo", just shutting stuff down randomly with no incentive for changing behavior). Maybe it'll come, and this is just the trial balloon. It could make sense to start with one that is over-the-top egregious. Even Scott Aaronson, who is famously over-the-top performative anti-Trump, went with this:
For the past year and a half, Columbia University was a pretty scary place to be an Israeli or pro-Israel Jew—at least, according to Columbia’s own antisemitism task force report, the firsthand reports of my Jewish friends and colleagues at Columbia, and everything else I gleaned from sources I trust. The situation seems to have been notably worse there than at most American universities. ... Last year, I decided to stop advising Jewish and Israeli students to go to Columbia, or at any rate, to give them very clear warnings about it. I did this with extreme reluctance, as the Columbia CS department happens to have some of my dearest colleagues in the world, many of whom I know feel just as I do about this.
He also sort of grudgingly accepted some game theory:
Time for some game theory. Consider the following three possible outcomes:
(a) Columbia gets back all its funding by seriously enforcing its rules (e.g., expelling students who threatened violence against Jews), and I can again tell Jewish and Israeli students to attend Columbia with zero hesitation
(b) Everything continues just like before
(c) Columbia loses its federal funding, essentially shuts down its math and science research, and becomes a shadow of what it was
Now let’s say that I assign values of 100 to (a), 50 to (b), and -1000 to (c). This means that, if (say) Columbia’s humanities professors told me that my only options were (b) and (c), I would always flinch and choose (b). And thus, I assume, the professors would tell me my only options were (b) and (c). They’d know I’d never hold a knife to their throat and make them choose between (a) and (c), because I’d fear they’d actually choose (c), an outcome I probably want even less than they do.
Having said that: if, through no fault of my own, some mobster held a knife to their throat and made them choose between (a) and (c)—then I’d certainly advise them to pick (a)! Crucially, this doesn’t mean that I’d endorse the mobster’s tactics, or even that I’d feel confident that the knife won’t be at my own throat tomorrow. It simply means that you should still do the right thing, even if for complicated reasons, you were blackmailed into doing the right thing by a figure of almost cartoonish evil.
This is what I have been saying. Use the tools that you have. Don't use them indiscriminately. Don't imagine that you're doing chemotherapy in just randomly attacking everything. Tailor them specifically to very very clearly change the incentives so that universities need to change at an institutional level and that if they don't, individual talent has a huge incentive to just leave them.
Now, of course, one always has to worry a bit about how when something is done by the stroke of a pen, it can be reversed by the stroke of a pen of the other guy (or an equal and opposite "Dear Colleague," letter). But solutions to that problem are much harder to come by.
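The quoted game theory can be checked mechanically. Using exactly the payoffs Scott assigns (100, 50, -1000), the best response from any offered menu falls out immediately (a trivial sketch, just formalizing the quote):

```python
# Payoffs from the quoted passage: (a) = 100, (b) = 50, (c) = -1000.
payoffs = {"a": 100, "b": 50, "c": -1000}

def best_choice(menu):
    """Pick the highest-payoff outcome from the menu one is offered."""
    return max(menu, key=payoffs.get)

# Offered only (b) vs (c), he flinches and takes (b) -- which is why
# the professors would only ever offer (b) vs (c)...
assert best_choice(["b", "c"]) == "b"
# ...but if a "mobster" forces the menu to (a) vs (c), the right pick
# is obviously (a), regardless of what one thinks of the mobster.
assert best_choice(["a", "c"]) == "a"
```

The whole dynamic turns on who gets to set the menu, not on the choices themselves.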
according to the github repo, that RL approach also gets to Cerulean City
Huh. I hadn't looked at it in a while. What I recall from when I last looked was that they were completely stuck on Mt. Moon. Looks like the repo, itself, doesn't contain the latest. They claim to have actually finished the game as of last month. Of course, there are a lot of caveats. They did significant simplification of the game, are using a bunch of scripts to provide extra guardrails, super detailed reward shaping (sometimes an ad hoc specific reward for a particular level), etc. The one that stood out to me originally is still there; they simply skip the entire first part of the game, because it was just failing, conceptually, to even get its first Pokemon or get the parcel to Prof. Oak or something (I don't remember exactly). It couldn't even get off the ground, so they literally just skip it.
These sorts of exercises are always delicate to comprehend, because it's almost never easy to figure out how to think about the unique ways they're inserting human expertise. Does it matter that mayyyybe it only needs a liiiiittle help on something that is bone stupid easy for a human, but incomprehensibly hard to whatever iteration AI we have? Will that get papered over suddenly in two months by some other method? Or is it just a limited tool that, with plenty of human expertise, guidance, and delicacy, can still do pretty phenomenal things? Who knows?
We are seeing a lot of pieces, and it's easy to imagine that if we just glue the right general-purpose version of those pieces together in the right way, it'll really soar. But there are still a lot of ways to be skeptical, and they happen to be concepts that are kind of hard to think about, given that we lack a really appropriate theory structure.
My reaction was that this is just about like most other advances in ML - simultaneously really cool/impressive and hilariously bad. It is genuinely really cool and impressive that it's done as well as it has. Someone on YouTube took a more pure RL approach a few years back, and it failed suuuuuper hilariously badly (in beautifully hilarious ways). Claude has definitely done better, and that's pretty legit, given that the core of it was trained to be an LLM, not to play video games. But one of the most true statements about ML still seems to hold true: "It's great when you want to model something where you don't know how to describe the underlying structure... and you're okay with it being hilariously wrong some percentage of the time." Some might think that the percentage of time that it's hilariously wrong is just a little bit too high, and it won't even need to drop that much before it works out pretty decently.
It's not surprising that it needs some scaffolding. The Bitter Lesson Believers will always believe in their hearts that they can eventually drop the scaffolding, and maybe they'll be able to, sometimes. But most of the big advances we've had in ML are because we've exploited some sort of structure in the world. And the most killer applications are where we have very good feedback in a very structured fashion (e.g., tree structures in board games, math/coding engines, etc.).
It definitely puts a damper on any predictions that AI is going to ingeniously conquer the world later this year, but as you project further and further into the future, it's always a matter of, "It's difficult to make predictions, especially about the future."
Which paragraphs are these?
The first examples are the ones I already cited, with blockquotes.
Fong Yue Ting
I would definitely bin these under the category of being just, "Yeah, dude's obviously getting deported." But let's take a look at a few pieces of the opinion of the Court. The syllabus begins with a banger:
The right to exclude or to expel aliens, or any class of aliens, absolutely or upon certain conditions, in war or in peace, is an inherent and inalienable right of every sovereign nation.
The opinion basically begins by citing Nishimura Ekiu v. United States:
"It is an accepted maxim of international law that every sovereign nation has the power, as inherent in sovereignty, and essential to self-preservation, to forbid the entrance of foreigners within its dominions, or to admit them only in such cases and upon such conditions as it may see fit to prescribe...
Then, citing Chae Chan Ping v. United States:
"Those laborers are not citizens of the United States; they are aliens. That the Government of the United States, through the action of the Legislative Department, can exclude aliens from its territory, is a proposition which we do not think open to controversy. Jurisdiction over its own territory to that extent is an incident of every independent nation. It is a part of its independence. If it could not exclude aliens, it would be, to that extent, subject to the control of another power... [emphasis added]
That is, there is actually something lost in terms of jurisdiction if the nation is not able to exclude aliens. That would be very strange if such individuals were "subject to the jurisdiction thereof". What about the whole hullabaloo about whether you can call it an "invasion"? The Court quotes Chae Chan Ping again to basically say that this question doesn't matter:
If, therefore, the Government of the United States, through its Legislative Department, considers the presence of foreigners of a different race in this country, who will not assimilate with us, to be dangerous to its peace and security, their exclusion is not to be stayed because at the time there are no actual hostilities with the nation of which the foreigners are subjects. The existence of war would render the necessity of the proceeding only more obvious and pressing. The same necessity, in a less pressing degree, may arise when war does not exist, and the same authority which adjudges the necessity in one case must also determine it in the other.
They cite various "commentators on the law of the nations":
"The Government of each State has always the right to compel foreigners who are found within its territory to go away, by having them taken to the frontier. This right is based on the fact that, the foreigner not making part of the nation, his individual reception into the territory is matter of pure permission, of simple tolerance, and creates no obligation... [emphasis added]
Shades of "implicit license". Again, the Congress has given no permission, no license, for them to be here at all. Actually, a bit more on licenses:
Whatever license, therefore, Chinese laborers may have obtained previous to the act of October 1, 1888, to return to the United States after their departure is held at the will of the Government, revocable at any time at its pleasure...
and
In view of that decision, which, as before observed, was a unanimous judgment of the Court, and which had the concurrence of all the Justices who had delivered opinions in the cases arising under the acts of 1882 and 1884, it appears to be impossible to hold that a Chinese laborer acquired, under any of the treaties or acts of Congress, any right, as a denizen, or otherwise, to be and remain in this country except by the license, permission, and sufferance of Congress, to be withdrawn whenever, in its opinion, the public welfare might require it.
It really seems that illegal aliens simply lack any license, implied or otherwise. Of course, if they are permitted, then they are subject to the laws:
By the law of nations, doubtless, aliens residing in a country with the intention of making it a permanent place of abode acquire, in one sense, a domicile there, and, while they are permitted by the nation to retain such a residence and domicile, are subject to its laws and may invoke its protection against other nations.
and
Chinese laborers, therefore, like all other aliens residing in the United States for a shorter or longer time, are entitled, so long as they are permitted by the Government of the United States to remain in the country, to the safeguards of the Constitution, and to the protection of the laws, in regard to their rights of person and of property, and to their civil and criminal responsibility. [emphasis added]
Again, what if they are not permitted or licensed? Are they then entitled to the safeguards of the Constitution and so forth? The implication sure seems to be no.
So yeah, my read of that opinion is that it's basically just, "Yeah, dude's obviously getting deported." Moreover, it reaffirms that aliens need some sort of permission or license to be here (implicit or otherwise), without which it's not even clear that they are entitled to any of the safeguards of the Constitution (much as you and I might want it to be otherwise), much less that they are considered subject to the laws or jurisdiction even if they were so entitled.
Why would the Wong Kim Ark Court even consider the question of whether blatantly illegal aliens were some special exempted class under a Constitutional protection, when it had already cited prior precedent essentially saying that they were categorically ineligible to appeal to any sort of Constitutional protection whatsoever?
Ok, great. Glad to know that you would not be able to conclude that either side in the example scenario is a "Fascist Authoritarian". Now hopefully we'll find out what @WandererintheWilderness thinks we can conclude.