gattsuru

11 followers · follows 0 users · joined 2022 September 04 19:16:04 UTC

User ID: 94 · Verified Email

No bio...

While people do overstate the difference, nameplate capacity and practically usable output are not identical for solar or for grid-scale storage. I'm pleasantly surprised by the growth of solar, as someone who was genuinely very pessimistic in the late 00s, but there are still a number of limitations to the technology.

On the flip side, a ton of the financial limitations to nuclear power are regulatory, and often regulations established by people who explicitly want to smother nuclear power completely. It'll require some uptooling to bring down costs, but there's a massive amount of low-hanging fruit. I like the SMRs for a variety of technical reasons, but even 300-800 MW plants are really not the sort of thing that should take decades to construct, and that time component is what absolutely murders the financial model.

Now, there are limits to the technology -- just as solar can't beat nuclear for baseload capacity, nuclear's near-uniquely bad for peaking power. I don't think nuclear can or should displace most renewables, and I'd be surprised if, all together, they can completely displace LNG peaker plants in the next couple decades. But there are reasons beyond politics to argue for them both.

And, of course, as Biology is Mutable argued, just because the problem is political doesn't mean it's solvable. It may be that there's no way to get the anti-nuke nuts out of the regulatory process, or that the only way to do so is extraordinarily costly.

No, you're right, I'm wrong, and Reuters didn't goof this one. Sorry, that's embarrassing, and what I get for trying to do this sorta check on a cell phone. The correct person had one charge for dealing crack cocaine.

Hasn't pardoned Reality Winner (which I did expect four years ago) or that IRS leaker (which would unpleasantly surprise me, yet, growth mindset), so there's still some downhill to go. But there's another couple hours left to slide down the slippery slope.

I'll also add, to the extent that media coverage of 'normal' pardons is obfuscating things:

> The other people pardoned include Darryl Chambers, a gun violence prevention advocate who was convicted of a non-violent drug offense, immigration advocate Ravidath “Ravi” Ragbir, who was convicted of a non-violent offense in 2001, the White House said in a statement.

They are, unsurprisingly, also strong political advocates for the President's (aides') political positions, but they're also separately testing the limits of Scott Alexander's 'media doesn't lie' spiel.

> Following a jury trial in January 1995, Defendant Darrell Chambers was convicted of several counts: Count 1, continuing criminal enterprise, in violation of 21 U.S.C. § 848; Count 2, conspiracy to distribute cocaine, in violation of 21 U.S.C. § 846; Counts 4 and 6, false statements to institution with deposits insured by the FDIC and aiding and abetting, in violation of 18 U.S.C. §§ 1014, 2; Counts 8 and 9, laundering of monetary instruments and aiding and abetting, in violation of 18 U.S.C. §§ 1956, 2; Count 10, attempted possession with intent to distribute cocaine and aiding and abetting, in violation of 21 U.S.C. §§ 846, 841(a)(1) and 18 U.S.C. § 2; Count 11, false statement in connection with the acquisition of a firearm, in violation of 18 U.S.C. § 922(a)(6); Count 12, possession with intent to distribute cocaine and aiding and abetting, in violation of 21 U.S.C. § 841(a)(1) and 18 U.S.C. § 2; and Count 13, felon in possession of a firearm, 18 U.S.C. § 922(g). See ECF 220, PgID 161-162.

I don't have high expectations for Reuters, but I would hope they were able to count.

As a metaphor, from Ames v. Ohio:

> Title VII of the Civil Rights Act of 1964 bars employment discrimination against "any individual"—itself a phrase that is entirely clear—"because of such individual's race, color, religion, sex, or national origin[.]" 42 U.S.C. § 2000e-2(a)(1). Thus, to state the obvious, the statute bars discrimination against "any individual" on the grounds specified therein. Yet our court and some others have construed this same provision to impose different burdens on different plaintiffs based on their membership in different demographic groups. Specifically—to establish a prima-facie case when (as in most cases) the plaintiff relies upon indirect evidence of discrimination—members of "majority" groups must make a showing that other plaintiffs need not make: namely, they must show "background circumstances to support the suspicion that the defendant is that unusual employer who discriminates against the majority." Zambetti v. Cuyahoga Cmty. Coll., 314 F.3d 249, 255 (6th Cir. 2002) (cleaned up) (quoting Murray v. Thistledown Racing Club, Inc., 770 F.2d 63, 67 (6th Cir. 1985)).

To be fair, SCOTUS is hearing this matter on appeal in February. To be less naive, I included those very long citations because Murray v. Thistledown dates back to 1985, aka nearly forty years of Some People Are More Equal before SCOTUS might slap their wrists.

I'd point to the Climate Kids case as an example of how badly some judges do want to jump not just on random bullshit that helps their side, but specifically whatever framework needs legitimization at the time.

I think you're right that a lot of the Tribe-thinkers genuinely think they can just force it through by meme magic and lawschool paper bullshittery, and they're not even wrong in every case, but that looks a lot like forking the constitution in practice.

It's believable, and reflects badly on Musk if true.

Which makes it a little awkward when Harris has to interrupt his 'trust me bro' with 'oh, and that Triggernometry episode that keeps getting thrown around totes is being misportrayed, don't believe your lying eyes'. I don't follow Harris, so maybe his interpretation was right, or that speech was a one-off and he's spent the last four years trying to bend over backwards to admit that he was wrong then. But it makes a surprise reveal of a three-and-a-half-year-old claim need a little more proof than 'it's believable, and reflects badly on Musk if true'.

((And the fact that Harris can't make it through a Substack piece this short without dumb asides like the Gaetz comparison leaves me skeptical that he is trying to bend over backwards.))

For some examples of degrees of System Stuff:

  • Forge of Destiny is low-System, to the point where most people who don't like system stuff will tolerate it. Magic in this world has some level of predictable rules and understandable paths or levels of progression, but these exist only in the sense that we have concepts like evaporation or atoms. At their most game-like, you'll find this level of writing has 'ranks', where a higher-rank character will rarely lose to one of a lower rank, and almost never to one several ranks below, but they're still closer to things like martial arts black belts.
  • The Broken Knife is more moderate. Rules in-setting are not merely physics, but reflect and revolve around extremely well-defined roads of progression, though they're still not too overtly game-like. There are few or no explicit stats, but characters spend no small amount of time considering conceptual visualizations of various attributes.
  • Ar'Kendrithyst is a more central example, where there are explicit stats and Blue Boxes, but they aren't the only way to interact with the universe. Ar'Kendrithyst has a Watsonian explanation, where the System was created as a tool to simplify access to more 'wild' magic, but sometimes these are just the fundamental laws of physics (such as in Edge Cases; cw: stubbed), or they're available to 'only' the viewpoint character because of how they interact with magic (eg The Heart Grows).
  • Delve (cw: low update rate) is at the higher end, where not only is there a group of in-universe-designed processes giving various game-like user interfaces to people, and actual chapters focus heavily on optimizing for specific stat numbers, but the overarching plot is about understanding and fixing the system and its hidden knowledge. Intelligent Design is another extremely high-System work, though it's better than most.

Even good high-systems works tend to end up with pages or even chapters full of random blue boxes that will get your eyes to gloss over; bad System-focused works tend to end up feeling like watching someone grind an MMO, often in full murderhobo mode.

There's also just kinda a mess around the question of to what extent Trump was actually involved.

Contemporaneously in 2021, there were quite a lot of allegations that Trump and close associates were deeply integrated into the planning and execution phases of the riot itself, to the point where the Trump campaign was supposedly giving out pdfs with specific movements to specific individuals. Over the next couple years, we had people testify that he got into John Wick-esque battles with the Secret Service, that he'd called off military assistance to Capitol officers. It's still possible that's the case!

But what's actually been proven is that he gave a pretty dumb speech, and his campaign authorized the 'alternative' 'fake' elector slates, and he called the governor of Georgia. I'd argue that this is an impeachable offense, and that in a sane impeachment hearing we could start pulling at the threads of those more serious allegations to see if any justify a conviction. They didn't -- the impeachment hearings were a political joke even by the low standards set by recent competition -- and as a result we never even got to the question of whether such a conviction would be justified or whether such a conviction should include future prohibition from office.

I don't know whether his involvement genuinely ended with the speech, but there's a lot of underpants-gnomes logic involved between that speech and the riot itself.

Some states have a distinction of 'primary' and 'secondary' traffic offenses: primary offenses can be justification for a police stop and citation at any time, while secondary offenses can only be issued where a stop has already begun under reasonable suspicion of a primary offense, or another citation is already being issued.

Depending on the offense, the theory is either that the secondary offenses are intended to augment other errors (eg, speeding a few MPH at night while your headlights are broken is much worse than speeding a few MPH), or that enforcement of the law while a vehicle is in motion is so impractical that it would more often be used as justification for improper or illegal stops than for true enforcement (eg, you aren't going to be able to tell if a driver has buckled their seatbelt fully at 80 MPH on a freeway no matter how good your eyes are), or that the law is intended more as a guideline and has been abused in the past (eg, pulling someone over for a single broken tail light was notorious as a pretext for other searches, rather than an opportunity to tell people to get the light fixed).

That said, while New York has considered such a distinction, I don't know the state of the current law there or in Washington. And sometimes this is a policy thing, rather than a statute one.

I'm not sure on that -- it might well be more a symptom than a cause.

> When Musk learned of his daughter's transition "he was generally sanguine," according to Isaacson, but after Jenna became a "fervent Marxist" she cut Musk out of her life.

> When Jenna filed a request to change her name so that it would better reflect her gender identity, she wrote that she was also requesting a name change because "I no longer live with or wish to be related to my biological father in any way, shape or form."

> Musk claims, according to Isaacson, that when his daughter "went beyond socialism to being a full communist and thinking that anyone rich is evil" she severed all communication and relations with him. Isaacson says the "rift pained" Musk "more than anything in his life since the infant death of his first child, Nevada."

Maybe that's the sanitized and self-deluding version, and it's not the frame Wilson takes -- which puts cutting ties as the culmination of distant parenting -- but I think it's a little more plausible than most expect. I've seen more than a fair share of techie families where trans (or LGB) stuff will get a kid kicked out of the house, but there's also a lot of this story that I've seen elsewhere, where individual stuff was just awkward until Everything Leftism put friction into everything.

There are an annoying number of shops that used to love Cisco's port security option, which will lock down an interface on a switch to a certain segmentation of MAC addresses (usually configured in adaptive modes). It's... not as unmanageable as it sounds, though it is very unmanageable, and very much something that's usually only helpful against very specific threat models and when paired with a lot of other stuff.

> How does it broadcast its request if it doesn't have an IP address?

DHCP requests are transmitted over UDP to the broadcast address, usually 255.255.255.255. The standard says that this packet should have a source address of 0.0.0.0, but in my experience most DHCP servers aren't very picky about that. This packet is just a message going across a wire to every receiver on the local network (ie, up until the gateway), so the ethernet card doesn't need to have an IP address at that time. EDIT: for clarity, the client uses its MAC address to identify itself, so the server can respond to just the correct machine. This is one of many reasons that getting DHCP to run across network boundaries is an absolute nightmare. /EDIT
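For the curious, here's a minimal sketch of that discovery broadcast in Python. This is my illustration, not a full client: the MAC address is made up, the field layout follows RFC 2131, and binding port 68 needs root.

```python
# Minimal DHCPDISCOVER broadcast sketch (Linux, run as root).
# Field layout per RFC 2131; illustrative only, not a real DHCP client.
import socket, struct, os

mac = bytes.fromhex("deadbeef0001")        # hypothetical MAC address
xid = os.urandom(4)                        # random transaction ID

# BOOTP header: op=1 (request), htype=1 (ethernet), hlen=6, hops=0
packet = struct.pack("!BBBB", 1, 1, 6, 0)
packet += xid
packet += struct.pack("!HH", 0, 0x8000)    # secs, flags (broadcast bit set)
packet += b"\x00" * 16                     # ciaddr/yiaddr/siaddr/giaddr all 0.0.0.0
packet += mac + b"\x00" * 10               # chaddr, padded to 16 bytes
packet += b"\x00" * 192                    # sname + file fields, unused
packet += b"\x63\x82\x53\x63"              # DHCP magic cookie
packet += b"\x35\x01\x01\xff"              # option 53 = DHCPDISCOVER, then end

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.bind(("0.0.0.0", 68))                    # DHCP client port; unconfigured hosts send as 0.0.0.0
s.sendto(packet, ("255.255.255.255", 67))  # broadcast to the DHCP server port
```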

> The local network is defined by the network mask, right?

For the purposes of TCP/IP, the local network is defined by the netmask. Physical networks (eg, having multiple routers with different subnets plugged into the same big switch) and logical networks (VLANs) can be, and often are, different. This is a space with a lot of namespace collision, so be wary of it.
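A quick illustration of the "is this local?" check, using Python's stdlib ipaddress module with the addresses from your example:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")        # 255.255.255.0 netmask

# Same subnet: deliver directly (after ARP), no gateway needed.
print(ipaddress.ip_address("192.168.1.3") in net)   # True
# Different subnet: hand the frame to the gateway instead.
print(ipaddress.ip_address("192.168.2.3") in net)   # False
```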

> So with 255.255.255.0, if I send something from 192.168.1.2 to 192.168.1.3, there's no need for the gateway to be set up...

At the risk of going too deep into the (lies-to-children!) OSI model:

Before doing anything else, the sending computer looks at its ARP table, which converts IP addresses to MAC addresses. If the destination IP address is not on the ARP table, it will send an ARP request, which is a broadcast message to the local network asking if any devices have that IP address (or, if not on the local network, it sends an ARP request for the local gateway). Once it finds the address, it inserts that IP-MAC pair into the ARP table, and uses it as part of the packet and frame shaping.
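In toy-model form (the helper below is a hypothetical stand-in for the actual broadcast request):

```python
arp_table = {}  # IP address -> MAC address cache

def send_arp_request(ip):
    # Stand-in for the real broadcast: "who has <ip>? tell <my ip>".
    return "aa:bb:cc:dd:ee:ff"   # hypothetical reply

def resolve(ip):
    if ip not in arp_table:                   # cache miss: ask the network
        arp_table[ip] = send_arp_request(ip)
    return arp_table[ip]                      # later lookups hit the cache
```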

The computer forms a packet, with a source IP address of 192.168.1.2 and a destination of 192.168.1.3, at the TCP/IP network layer, or layer three. The ethernet card breaks this into one or more "frames" with a maximum size called the MTU (historically 1500 bytes, but it can be larger where hardware supports it), aka the ethernet/MAC data link layer or layer two. It then transmits these frames as signals to the network switch, aka the ethernet physical layer or layer one.

This switch will receive the signals and convert them into the layer two frame. On older hubs, it would simply echo the frame out every port. On modern switches, it then inspects the frame for a destination MAC address. If the switch has records of receiving frames with a source MAC address matching that destination, it only sends the frame to that specific physical port or ports. If it has no record, it floods the frame out every port, and it's up to the receiving device to filter whether it's addressed properly. But the switch tables get filled with records pretty quickly.
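That learn/forward/flood behavior, as a toy sketch (port identifiers are hypothetical):

```python
mac_table = {}  # source MAC -> physical port, learned from incoming frames

def forward(src_mac, dst_mac, in_port, all_ports):
    mac_table[src_mac] = in_port              # learn where the sender lives
    if dst_mac in mac_table:
        return [mac_table[dst_mac]]           # known destination: one port
    return [p for p in all_ports if p != in_port]   # unknown: flood

# Example: first frame from port 1 gets flooded, the reply does not.
print(forward("aa:aa", "bb:bb", 1, [1, 2, 3, 4]))   # [2, 3, 4]
print(forward("bb:bb", "aa:aa", 3, [1, 2, 3, 4]))   # [1]
```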

((For older computers, there was a physical layer conversion issue; this is why crossover cables existed. But almost every modern device can automatically switch over.))

> but 192.168.2.3 is outside the network and the packets will be routed to the gateway?

In that case, the frame would be configured with a destination MAC of the local gateway, so the switch would look in its MAC table for the MAC of the local gateway, and usually only send the packet to the physical ports of the local gateway. This is layer two switching, not layer three routing.

It's only when the frame gets to the gateway, which reassembles frames back into a packet, that the destination IP address actually gets examined; the gateway then routes the packet by checking its own routing tables and its own default gateway.
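If you want to see the shape of that routing decision, here's a toy longest-prefix-match lookup in Python; the table entries are made up:

```python
import ipaddress

routes = {  # destination network -> next hop (toy routing table)
    ipaddress.ip_network("192.168.1.0/24"): "direct",
    ipaddress.ip_network("10.0.0.0/8"): "10.0.0.1",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.1.1",   # default route
}

def route(dst):
    addr = ipaddress.ip_address(dst)
    # Longest-prefix match: the most specific matching network wins.
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda n: n.prefixlen)]

print(route("192.168.1.3"))   # direct
print(route("192.168.2.3"))   # 192.168.1.1 (the default gateway)
```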

DHCP server restarts can cause IP conflicts pretty often, especially if you're running the DHCP server on a small home/office router that doesn't persist state. Windows will specifically warn about the IP conflict, and newer versions (Win7+) will often try to automatically reregister with your DHCP server if you're not running in static modes; Linux has some optional standards-compliant IP conflict notifiers.
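If you want to check for a conflict by hand, an ARP probe (the RFC 5227 duplicate-address-detection mechanism) works. Here's a sketch using the third-party scapy library, run as root; the candidate address is hypothetical:

```python
# ARP probe for duplicate-address detection (RFC 5227).
# Requires: pip install scapy, and root privileges to send raw frames.
from scapy.all import Ether, ARP, srp

candidate = "192.168.1.50"   # hypothetical address we want to claim
probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, psrc="0.0.0.0", pdst=candidate)
answered, _ = srp(probe, timeout=2, verbose=False)
if answered:
    print(f"{candidate} is already in use by {answered[0][1][ARP].hwsrc}")
else:
    print(f"{candidate} appears free")
```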

If not corrected, the usual results are inconsistent communication and higher network utilization: network switches will resolve the IP address to multiple physical ports, which causes packets to be sent to many more places than they need to go, and can sometimes cause TCP connections to go wonky.

((There are exceptions and sometimes even cases where you could use this behavior, but they were always rare and increasingly have been replaced by better solutions.))

I don't think you'll see a true 'dumb switch' (technical term 'hub') in ethernet from a major store; I haven't seen a new one since back when 10/100mbps switches were just phasing in. But they definitely existed, and it wasn't uncommon for one person to be able to bog down an entire intranet.

In the modern day, the distinction between 'dumb' and 'smart' switches is usually going to emphasize 'smart' switches as having optional routing functionality (aka 'layer 3 switching'). This technically means that the layer 3 switch has one or more ports that can be configured into a router mode, though in practice it'll be missing a lot of other functionality you'd expect from a small home or office router (almost always missing NAT/PAT, usually not having DHCP or DNS).

The base Arch install has gotten a lot better these days: if you're considering something like Manjaro or EndeavourOS, I'd really recommend just going straight to Arch with a list of desired programs. And it will teach you a lot about what's actually going on. But that's less because archinstall is superhumanly easy to use, and more because EndeavourOS/Manjaro will let you get really far over your head if you can't or don't want to get into the real nitty-gritty of things.

It's a lot better as an option after you've already gotten enough experience with a more placid distro to know what you need to run first, though, so I really can't recommend it for new Linux users unless they've got a very specific use case.

For the most part, modern Linux problems with power management, regardless of distro, tend to revolve around putting the computer into the right sleep states, or powering down newer CPUs to just their e-Cores, rather than high idle utilization. Fancy systems like hypr will have more idle cpu utilization than minimalist ones like DSLinux, but on a mainstream processor from the last ten years it's going to be a wash.

As in the post KingOfTheBailey linked, I'll usually point most newer users to Linux Mint. It's not hyperoptimized at any one thing, but it'll give you the most reliable on-boarding experience. Ubuntu, Pop!_OS, and just plain Debian are all other good options for most cases, and for people attached to the Windows/Mac UI paradigms, Elementary or Zorin will work.

Yes. The Mint installer also acts as a pretty good liveCD/liveUSB, so you can test out basic functionality without having to do an install at all, if you want to verify this for your specific hardware.

Most Linux distros fall into this behavior now -- even Arch has pretty good hardware support just with the absolute minimal install -- so I'm really recommending Mint more for its interface and new user experience.

The only gotchas I'll caution about for normal hardware:

  • For nVidia GPUs specifically, Mint will default to the fully open source ('nouveau') drivers. These work well enough for casual use, but they are less performant than the 'nVidia Open' or proprietary drivers (or even the community-clone NVK). Mint has a specific driver manager tool that it will wave at you in the New User Experience screen, but you do have to click the button and reboot. AMD doesn't have this problem.
  • You may want to disable SecureBoot in your BIOS. Linux Mint can handle it fine, but Microsoft Updates have broken Linux installs using it in ways that made getting the data out hard before, and I wouldn't be surprised if they do it again.
  • If you want to dual-boot (which is a good way to work!), installing Windows then installing Linux will result in an EFI partition that is 100 MB. This is probably too small. The easiest way to fix this is to use GParted from the liveUSB environment; it's available from either apt-get or from the package manager. You'd want this tool anyway if you're trying to migrate an existing full-drive Windows install so that the disk has two partitions, but it's a lot easier to modify EFI from Linux than from Windows. It won't always be an issue, but it's a lot easier to fix early rather than after you're comfortable with your system install.
  • WiFi and Bluetooth drivers can give rare problems. It's now more at the 'check if anyone has had a problem' rather than 'check if someone's gotten it to work' stage, but especially very old (>5 years) Realtek USB wifi is prone to being annoying. Printers used to fall into this category, but in the last couple years I've found it better at handling printer drivers than Windows (uh, sometimes to an aggressive degree; one Mint laptop autoinstalled a business printer at a commercial airport I visited). Audio can sometimes require rare pipewire (audio service) restarts to unfuck it after a big config modification, though I haven't had that problem in around six months now.
  • Mint (for now) runs using X11, which doesn't handle different refresh rates on multiple monitors well, or HDR at all -- you can make it work, but it won't be pretty. You can switch to Wayland in Mint, but without the >550 nvidia drivers there's a serious flickering problem in some games. If you're running a 240hz HDR monitor for gaming and a 60hz monitor for reference material, it might be worth looking at something like Pop!_OS or KDE Plasma, both of which prioritize Wayland support a little higher. This will probably get fixed enough in the next year or so that Wayland becomes either default or an easy swap option in Mint, though.

The big problems tend to be about more specialized stuff: VR headsets (especially WMR headsets), sound mixer boards, drawing tablets. Or about specific software, especially commercial software that phones home regularly, like DaVinci Resolve, Photoshop, so on.

Fireworks (and camping fuel) are generally not impact-sensitive explosives: shooting a box of fireworks will almost never set them off, and rupturing a camping fuel container is bad more because of the future risk of a flame. LiPo batteries (famously) Don't Like being punctured, so those are more plausible, but they're also placed very specifically on the bottom of the car in electric vehicles for a variety of reasons, and I am hard-pressed to consider the sort of gymnastics necessary to intentionally bust the battery while gargling a bullet.

Maybe if he had something like Tannerite, which is impact-sensitive and sometimes used in celebrations, but the video going around isn't (thankfully!) a tannerite explosion.

Anything's possible, but it's a really unlikely answer.

I'll second AliceMaz's writeup, albeit with the caveat that the server in question there was unusually large. Most long-lived servers usually only get in the high double-digits of regular players, and correspondingly a lot of social rules are more varied and sometimes superstition-like (eg, where hoppers are acceptable is a surprisingly complex question).

The BlanketCon 2022 postmortem is more about the technical side of a fairly short-lived server, but you can kinda see the motions around what social rules were working under the hood at the time.

Unfortunately, a lot of good pre-YouTube era stuff was written up on the old minecraft forum, and has since evaporated for GDPR reasons. Most analyses these days are in video form on YouTube, and they're often made by people who conflate the technical build side or lets plays with the social rule one, or only include the social rulemaking by accident. And even on YouTube, a lot of multiplayer SMPs are either more Let's Plays or outright scripted events. Or are just 2b2t voyeurism: there's some social stuff to Nocom/Randar/whatever, but it's not really a social rule-making thing.

I'd be interested if anyone's more familiar. I can think of a half-dozen Oilfurance-style stories for the short-lived SMP server I helped with a few years ago, so it's kinda surprising that nothing else is showing up on Google, but I dunno if anything I've got would be interesting enough for other people.

In addition to the matters SteveKirk brings up, I'd check what resolution YouTube is streaming at. There have been some changes in the last few months resetting default resolution values, and it'll quite often favor resolution values that you neither need nor want on many systems. 480p or even 360p is a lot easier on your processor and bandwidth if you're not reading text or looking at fine details of the video.

Officially, normal Win10 support ends October 2025, and Windows 10 Enterprise LTSC support ends in January 2027. There’s a one-time offer of Extended Support, but it’s thirty bucks per station and gonna be pretty limited. I would expect some limited security updates after that despite Microsoft’s best promises, and it’s certainly possible Microsoft does a last-second extension, but it’s not a lot of time for migration prep if you’re worried about 11.

Linux-in-ChromeOS is not awful, though it's very limited, and you'll typically hit the limits of the ChromeBook's hard disk with just the built-in minimal software.

For fully ripping out ChromeOS and replacing it on actual ChromeBooks, support problems can be as deep as the bootloader, firmware, and even CPU. Some are supported well-enough, but if your device is not on the mrchromebox or chrultrabook lists, getting out of ChromeOS can range from 'research project' to 'science project' to 'not gonna happen'.

If you're willing to buy a new ChromeBook specifically to convert to Linux, your options are better, but you're still going to have to be selective and do your research. In general, ARM is a ton of work to end up with a machine that may not be able to run a lot of apps (or require compiling them from source... for days), and AMD processors can have weird gaps in support or require very specific kernel versions. But I've mostly avoided it outside of a couple science projects; you're probably better off asking someone more focused, there.

> I guess maybe the Venn diagram of the people who want super low end hardware and the people who are techy enough to dive in with Linux is extremely small?

A lot of it is that it's a fairly small field, and that people in it tend to be very focused and not very price-sensitive. You can find a lot of not-powerful Linux-focused computers, but they're often that way because they're prioritizing an open-source-down-to-the-instruction-set ideology (not ready for primetime), or because they want it so small it fits in a cargo pants pocket (GPD Pocket), or they have other ideological attachments (eg Framework). Where Linux is focused on a mobile device that's gotten mainstream attention, it's usually for a specialized use that requires more expensive hardware (eg, SteamDecks and most competitors use a locked-down Arch variant).

The other side is that the used (and renewed, and just-trying-to-clear-old-shit) Windows market is extremely hard to compete with, and almost anyone who's interested in using Linux can install their preferred setup easily. Even mainstream clearing houses like Amazon or NewEgg have a ton of conventional Windows options under 250 USD for the 11"-14" market (caveat: specific sellers not endorsed), and if you're willing to trawl eBay or govdeals you can find stuff at half that price... at the cost of buying used.

> Alternatively, anything in particular I should look for/avoid if I'm considering buying new low-end hardware, for the purposes of flipping it over to Linux?

Almost all x86-64 Windows laptops will handle common Linux distros fine. I'd avoid touchscreens unless you're actually going to use them, because disabling them in-Linux can be a little obnoxious, but that's a pretty uncommon issue. If you start looking at gaming the nVidia vs AMD (vs Intel) problem gets more complicated, but at this price range it's just not a choice.

I do recommend getting more RAM than you think you need.

> What I don’t understand is why there’s no pushback on increasing the need for certification of the dogs.

It's part of a more-than-thirty-year-old regulation, and the necessary parts of the Department of Justice and Department of Transportation that make up the relevant rulemaking processes are never going to want to get involved in the necessary levels of oversight, nevermind do so with enough clarity and consistency that normal businesses will be willing to take the risk of allowing employees to make a decision. Because a lot of actual enforcement tends to involve veterans, it's a political third rail even for otherwise regulation-skeptical conservatives.

There's some Reason-style pushback, but because there's such a mess for any implementation -- who does the certifications? how do you verify that they aren't just some web template? -- there's no clear better local maximum with a path to reach it short of full prohibition, and there's no political will to do that.

Lutris has good-ish GOG support built-in now. There's still some jank, especially for handling multiplayer (thanks, GOG Galaxy!), but if you don't mind checking things out and some slightly longer install times, you may find a lot more of your library has a much better level of support than you'd expect.

RTX depends pretty heavily on the game and card, but games that support it internally are more often workable than not, usually without other jank. DLSS 2.0 is available and pretty well-supported. DLSS 3.0 is more mixed; you're pretty much dependent on proton experimental, but I have seen some games (eg Satisfactory) with it working. I haven't tried or found good answers (either yay or nay) for other people trying RTX Remix for older games.

I took the plunge for The Year Of Linux On The Desktop, starting with a few tiptoes in 2016 and moving my personal computer default boot in 2021. I had long experience with server Linux, and that used to be important, but it's gotten a lot better today. For use cases:

  • ChromeBook replacements / web browser machines: 110%. You can just run Chrome/FireFox/Brave on a local machine, and be happy, or you can install LibreOffice/various calendars/whatever and also have good local offline functionality, if sometimes with a dated UI. The only real downside here is that new laptops sold with Linux support will usually start at four or five times the price. If you're comfortable buying used equipment and swapping out batteries, you can get <150 USD pricing on three-year-old mid-range hardware, but this is extra work and has limited availability.
  • Desktop futzing around: 99.9%. Since 2021, I've had maybe three document files I couldn't open fine with LibreOffice, and about a dozen websites that didn't just accept FireFox-on-Linux-with-working-uBlock as equivalent to Chrome-on-Windows. Video streaming is fine, audio streaming is fine, Discord's updates are a little more annoying but mostly work out of the box.
  • Gaming: 90%, but highly variable. If you're playing mainstream games from Steam that don't use an anti-cheat, and run on an Xorg-based desktop, 99% of games will run with little more than checking a 'run in proton' box. GOG-based games can be a little more annoying (install Lutris to install GOG Galaxy to install No Man's Sky with online support), and I have run into games that didn't work without a lot of extra work, but it's a lot better than I expected. Other games can range from 'one extra step over Windows that's well-documented' (Vintage Story) to a lot of annoyances (Star Citizen, tbf not a Linux-specific thing) to "you have my sympathy" to 'will work, but Bungie will ban you for it' (Destiny 2). Anti-cheat updates can break perfectly-functioning games, and most anti-cheat-focused games simply won't work in Linux. Mods can sometimes break a perfectly-functioning game (which raises some very serious questions about ARK: Survival Evolved's sandboxing). There's been times where I've gone literally years without having to boot to Windows to game (FFXIV, Factorio, and Vintage Story have been pretty great out of the box), and other times where it was once a week.
  • Laptop Use: 95%. I have had some laptops where driver support, especially for things like lid-close hibernate/sleep, either didn't work or wasn't reliable. Fingerprint readers tend to be flaky as well. Battery life can range from better-than-Windows to much-worse as a result. But the core functionality has almost always been there.
  • Server functionality: 99%. Hosting your own file share and calendar setup is pretty trivial with NextCloud, collaborative document editing is a little more tedious but absolutely doable (I used to recommend Collabora, they still work but are a little naggy), Jellyfin is great for local video or media streaming, LLDAP for authentication for serious home server users, so on. My only big complaints are that calendar sync protocols are a clusterfuck, where each calendar works fine individually but syncing to something like an iPhone's CalDAV support is basically playing Russian roulette, and that setting up your own VPN is still a little too hard for nontechnical users.
  • Software- or Hardware-specific use cases: 50% coin flip. Sometimes the Linux-friendly version of a software merely has a learning curve, like comparing Blender to ZBrush; sometimes it's a cliff, like trying to go from Fusion360 to FreeCAD. Some hardware will work out of the gate, some like VR headsets might be a couple hours of fucking with text files and the command line, some is 'just build your own driver' level bad. Software built specifically to interface with hardware can be especially frustrating: Carbide Create works surprisingly well, LycheeSlicer was a crashomatic for the better part of a year, and sometimes even stuff that should work sometimes breaks in weird ways (how did Prusa fuck up their slicer?). Audio decks are notoriously hit or miss; drawing tablets (especially w/ pressure sensitivity) can be annoying.
  • Phone replacement: 1%. The ZeroPhone project hasn't updated since 2019, and a variety of competitors have simply crashed and burned. The PinePhone and Librem are probably the best options out there, but they're still pretty awful as phones. You can technically throw something together with a pocket computer and some VOIP software, and I've done it, so it's technically possible, but even as a pretty high-use techie I can't really make the argument for doing so no matter how much I want to.

However, there are some caveats, sometimes serious ones:

  • The Linux file management system is difficult for normal users to adjust around, especially for desktop environments where data hoarders might have three or four drives. There's ways to make it more understandable, but virtually no distro will do so by default, and a lot of tools will actively get in your way -- even the otherwise excellent Lutris and Steam launchers are prone to spreading config files across a million weird directories.
  • Trying to convert existing Chromebooks to Linux can be doable, but is seldom worth it, and it's not always even possible.
  • Xorg (most linux distros) is still more reliable than Wayland (hypr, COSMIC, Enlightenment), especially on nVidia hardware. I'd expect that this changes in the next 6 months to two years, but if you have extremely low tolerance for rare (two-three times per hour) flicker in some games, this can be a serious thing to consider early. I have no idea why or how Minecraft is one of the games most prone to it, though. When this does get better, most Xorg-based distros will probably just switch over for you or at the next distro-upgrade.
  • You will generally have to opt-in for 'proprietary drivers', both for dedicated graphic cards and for certain web browser codecs. The open-source ones are actually getting a lot better (uh, more so for AMD than nVidia), but at best expect performance loss, and sometimes stuff won't work. The web browser ones will give you a header notification and handle the install on their own when you actually need it, but the gpu drivers can require you to touch the driver manager tool -- Linux Mint's is very easy, but Arch and OpenSUSE can involve some command line work.
  • Prepare an automated backup option, ideally more than one. Windows and Chrome do a lot to protect naive users, at the cost of OneDrive breaking a ton of shit, but most Linux distros at best will do some on-system backups or version rollbacks. You're very unlikely to need them -- I've had only one break across three machines in four years, and that was because of a Microsoft fuckup I was able to work around without absolutely needing the backup -- but when you need them it's often too late to hope. Windows users should do this too, but it's more essential for Linux.
  • Some normal users have zero tolerance for change or frustration, especially on their main desktop computer. I would strongly recommend starting with Chromebook replacements and your own machine well before giving a non-technical user the same thing on their main system.
  • I like tiling window managers, and hyprland looks really nice, but if you're not the sorta techie that likes learning new conventions it's a big lift. Would definitely not recommend for normal users.
  • If you dual boot (which I recommend!) or have nvidia graphics, the default Windows EFI partition at 100 MB is wayyyyyyy too small, and will result in weird and hard-to-diagnose bugs. Resizing (or even modifying it) from within Windows is an absolute nightmare, so look for guides on how to do it in your linux distro, and do it early. 600 MB is overkill, but will save you a lot of frustration down the road. This matters a lot less for machines without dual boot, and with just integrated graphics cards.
  • Most distros will be 'regular release', meaning that while they provide reliable normal updates, certain big changes in feature or function set will only occur with once-a-year-or-less version updates. Upgrading from one version to another (usually called a distro upgrade) can range from 'a single command and five minutes' to 'cross your fingers and hope' to 'nope', and most fall in the middle. Old distro versions can sometimes keep getting security updates for years, but even the long-term support versions will eventually run dry. The alternative is called rolling release, where you just get whatever package version is newest and passed whatever stability checks your distro's maintainers run. This keeps you closer to the power curve, but means you can get a lot of often-pointless upgrades (no, VSCode, I don't need a twice-weekly version update for a glorified json linter) and can rarely find problems because of weird cross-library or cross-program compatibility issues.
  • "Stable is late, experimental is broken", at the risk of quoting someone I can't find quickly. Especially for web browsers, it's important to make sure that you're keeping up to date: while Linux desktop is much less of a target for various computer crud than Windows (or even Mac), stuff that attacks just your browser or just a single service can absolutely wreck that service if it gets months or years out of date. Worse, because of the above, regular release distros will eventually just stop providing any updates at all for nearly everything, and this can be in a much shorter timeframe than in Windows environments (eg, Linux Mint usually gives 5 years for all LTS versions, Ubuntu technically goes to ten if you subscribe, some distros will just shrug and say about three).
  • While distros sell themselves based on UI and various concept specializations, for the most part they're really defined by their package manager, default package repositories, and (where present) app store. Even these things can eventually be swapped out, though it's usually painful enough that it's easier to do a reinstall instead.

For distros:

  • Linux Mint (Cinnamon edition) is kinda the default option: debian-based, robust, very well-supported, lots of good functionality out of the box outside of the Ubuntu or Debian packages, obviously not-Windows enough that it doesn't feel like you're tricking people, but very similar in design assumptions.
  • Ubuntu used to be a good choice, but they've increasingly thrown the mandate of heaven in the trash, their app store and desktop environment have become a mess as a result, and the telemetry situation is, while nowhere near as bad as Windows, still rough. They still work, just wouldn't be my first selection anymore. If you're considering it, I'd instead point to Kubuntu; it's avoided a lot of the worst bloatware, if only by accident. Just be careful with the app store.
  • You can just install Debian. It's nowhere as difficult as Arch, and while it will not have a lot of the useful packages installed by default, most mainstream stuff is in their package repository without any serious problems.
  • Gaming-focused distros exist, but they're mostly some nice UI on top of a normal distro, rather than some serious change in functionality or design. Batocera or RetroBat can be useful despite that, but I wouldn't recommend them for anything but a dedicated gaming machine, and usually only where you're basically making a console replacement -- for a normal desktop, you can just install EmulationStation on almost every mainstream distro. I've heard Nobara falls here, but I haven't tried it.
  • I don't recommend Arch for your first linux install. It's a good exercise to understand how operating systems actually work, but you can (and your first time, probably will!) install the distro without a dhcp client, git client, or text editor. That said, while it's not the only rolling-release distro, it is one of the best-known and, imo, best-supported.
  • Manjaro is in a similar boat to Arch, with the added downside that the easy installer will absolutely let you get too far up an Arch creek without a paddle. About the only benefit is that Manjaro vets updates a little bit more, but imo not really enough to make a big difference.
  • Alpine/AntiX aren't really great for desktop usages because their distro upgrade situation is generally pretty bad, but they do work great for lightweight Chromebook replacements -- fast boot, good security updates, very lightweight.
  • ElementaryOS is a great Mac-like environment, and Zorin a very Windows-meets-Chromebook-like one. I've got... mixed feelings about these: trying to trick someone into not knowing that they're on Linux is an awful idea, and Zorin has a bad purchase schema on top of that. But if you just really like the more traditional UIs, they're not bad options. Caveat: you can just install Elementary as a desktop environment on top of other distros.
  • Gentoo is great for every usecase you don't need. If I had three hundred identical laptops that I wanted to set up exactly to a limited set of specifications, Gentoo is a great tool. If I just want to run a local machine and I don't know what I need, it's a lot of extra work for very little benefit. Wonderful toolset that I'll literally never recommend, because if you need it you know.
  • Kali/Parrot/whatever security-focused linux, mostly these are just convenience options, usually just extra stuff strapped onto Debian. If you're not trying to do security research, you don't need these; if you are, you're mostly only going to use them so you don't have a thirty-page install list.

Don't go too deep into the What Distro questions. There's a million one-offs or specialized distros that do a lot, or have a prebuilt user interface that's just that little bit better, or has a slightly nicer support forum, or comes with a lot of tools that exactly match your use case. These can often be great things! But finding support can be much harder, and they can be behind the power curve, and it ultimately isn't that big of a deal, and you don't need to get overwhelmed by your choices. For a shorter version of Just Your First Linux Distro:

  • If you just want a Linux setup that Just Works, go with Mint (Cinnamon for a desktop or newer laptop, Xfce for a 5+ year-old laptop). The UI will have a learning curve, but there's decent UI for nearly everything, the start search menu will redirect a lot of common windows tools to their linux counterpart, and it's just generally a good to a great experience.
  • For very light-weight uses (such as reviving a 7+ year-old laptop), probably AntiX. For very old systems (>10 years), Puppy Linux.
  • To minimize retraining, consider Zorin (for Windows) or ElementaryOS (for Mac), and try them out on a non-critical machine. These may still flunk the Wife Test, or may only pass it for some but not all use cases, and you should be crystal-clear that they are a new different thing and not just a Windows skin, but they'll have the least friction.