Bottomless Pit Supervisor made me feel like the floor dropped out from under me. Like we were headed for something dangerous. A lot of people had that reaction to GPT-3.5 when it started actually looking dangerous, but this stupid greentext was so perfect that it gave me that cliff's edge vertigo nearly a year early, on GPT-3. Every time I hear about AI progress, I think back to this meme as the moment I knew we were screwed.
There's basically always been an operation consisting of thousands of people looming just over the horizon, for more than a decade prior. Getting a few thousand guys together to cross the border and wreak havoc isn't much of a challenge, particularly given the very small size of Gaza and the distributed storage and management of weaponry across individual Hamas members. Sending a few kids on foot or on bikes to spread the word on the destinations and times of an impending assault is very easy; everyone mostly brings weapons they'd already been given weeks, months, or years ago for just such an event; and if the groups and destinations are determined even a little in advance, there's practically nothing left to do but go. I wouldn't be surprised if operations of that scale could be called up in a few hours, even factoring in planning time. And as others in the thread have noted, even well-prepared defenders can get caught with their pants down if the enemy makes an unexpected-enough move, so most of the ground-level chaos was caused while the IDF was still figuring out what was even happening.
What raises eyebrows is the size of the stockpiles of weapons, particularly the thousands of rockets launched out of Gaza on the day of the assault. Stuff that blows up doesn't tend to last long in Gaza, and the IDF regularly conducts operations to clear out ammo warehouses. Either they've somehow systematically missed thousands of stockpiled rockets over several years, implying Hamas has been unusually effective at keeping them out of sight over a prolonged period, across many changes in leadership... or a whole ton of rockets arrived at once from some sponsor, and were smuggled in on very short order by unknown means. I'd bet money on the latter.
My point is, it's entirely possible that a single well-exploited mistake, allowing rockets to be smuggled into Gaza by the thousands, was the difference between havoc and status quo for Israel this week. Right now, I don't think we can realistically conclude much about the competence of Mossad or western intelligence from single catastrophes, other than "they aren't perfect"; though I expect in the coming weeks we'll see lots more narratives and finger-pointing as Israel tries to understand how this happened and how to prevent it in the future. I definitely don't think there's any need to reach for conspiracy to explain the magnitude of the event, either; conspiracy is a sufficient, but hardly necessary, explanation for this outcome, and right now there are enough grieving people seeking retribution against someone as an emotional relief valve that basically any publicly visible conspiracy investigation is unquestionably compromised by emotion.
My best guess is, Hamas and Iran pulled a single good trick on Israel, and this sort of disaster was always one bad day away.
Create a Precision Repeat Offender Program (PROP)
A bit on the nose, eh? I can't tell if this was meant as a verb (i.e. the single line in this proposal propping up the rest of the bloviation), or as a noun (theater object to facilitate a more realistic performance). What a masterstroke.
Others have already mentioned that prosecution in the US is conditioned on local politics, that retail employees basically have their hands tied in responding to theft, and that there is a cultural factor encouraging and normalizing shoplifting and theft in subsets of big-city populations from a young age. This is probably the bulk of it. I'll add an unverifiable but anecdotally reinforced personal theory that might explain some of the low-repetition shoplifting behind the overall increase from 2020 through the end of 2022: I suspect mask-wearing during the pandemic years (and indeed long after them, for some people) tipped the risk-reward calculus for a lot of people, because many (wrongly) think they might otherwise get caught by facial recognition.
The local Walmart has large, conspicuous cameras set up at the doorways, ostensibly for recording crimes to be used in subsequent investigation or prosecution. That such subsequent investigation is rarely conducted is beside the point - to the average person, it sounds risky to get their face caught on camera while they're doing crimes. The marginal shoplifter could always wear a mask, but when they're the only one hiding their face, wearing their hoodie over their head indoors, or generally acting weird around cameras, they tend to stick out - it's not a stretch to imagine that the marginal shoplifter who's concerned about getting caught by cameras might also be concerned about looking obviously super suspicious. But when everyone's wearing a mask, hiding your face from the cameras is a free side-effect of a different normalized behavior. This might have emboldened a lot of marginal shoplifters.
This sounds convoluted, and probably doesn't have a huge impact relative to the other mentioned explanations... but a couple of low-income friends have insinuated to me that the masking requirements made them a lot less worried about getting caught by facial recognition in their own personal escapades, so I don't think I can discard it outright.
~576MP streaming at 30FPS with a FOV of 120 degrees
This is not quite right. Eyes have a huge overall FOV, but the actual resolution of vision is a function of proximity to the foveation angle, and there's only maybe a 5° cone of high-resolution visual acuity with the kind of detail being described. Just taking the proposed 120° cone and reducing it to 5° is more than a 99% reduction in equivalent megapixels required. And the falloff of visual acuity into peripheral vision is substantial. My napkin math with a second-order polynomial reduction in resolution as a function of horizontal viewing angle puts the actual requirements for megapixel-equivalent human-like visual "resolution" at maybe a tenth of the number derived by Clark. None of that is really helpful to understanding how to design a camera that beats the human eye at self-driving vision tasks, though, because semiconductor process constraints make it extremely challenging to do anything other than homogeneously spaced CCDs anyway.
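To make the napkin math concrete, here's a minimal sketch of the comparison. The peak density matches the 0.3 arcmin/pixel assumption behind the 576MP figure, but the quadratic falloff curve, the ~30° knee, and the peripheral floor are all my assumptions, tuned only to illustrate the order of magnitude:

```python
import numpy as np

# Rough sketch; every parameter here is an assumption, not a measurement.
PEAK_PPD = 200        # pixels per degree at the fovea (~0.3 arcmin/pixel)
HALF_FOV = 60         # half of the 120-degree field
FALLOFF_DEG = 30      # eccentricity where acuity reaches the floor (assumed)
FLOOR = 0.025         # peripheral acuity as a fraction of peak (assumed)

def ppd(theta_deg):
    """Linear pixel density vs. viewing angle: quadratic falloff, floored."""
    return PEAK_PPD * np.maximum(FLOOR, 1 - (theta_deg / FALLOFF_DEG) ** 2)

theta = np.linspace(-HALF_FOV, HALF_FOV, 4001)
row_px = np.sum(ppd(theta)) * (theta[1] - theta[0])  # pixels along one axis
uniform_mp = (2 * HALF_FOV * PEAK_PPD) ** 2 / 1e6    # the ~576 MP estimate
falloff_mp = row_px ** 2 / 1e6                       # separable x/y falloff

print(f"uniform: {uniform_mp:.0f} MP, with falloff: {falloff_mp:.0f} MP")
# -> uniform: 576 MP, with falloff: ~69 MP, i.e. roughly a tenth
```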
On top of that, the "30FPS" discussion is mostly misguided, and I don't actually see that number anywhere in the text; I only see a suggestion that as the eye traverses the visual field, the traversal motion (Microsaccades? Deep FOV scan? No further clarity provided) fills in additional visual details. This sounds sort of like taking multiple rapid-fire images and post-processing them together into a higher-resolution version, something commercial cell phone cameras have done for a decade now. This part could also be an allusion to the brain backfilling off-focus visual details from memory. It's unclear what was meant.
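For a flavor of the multi-frame idea (a toy construction of my own, not a claim about how the eye or any particular phone pipeline works), here's the simplest possible shift-and-add scheme, where low-resolution frames offset by known sub-pixel shifts fill in a finer sampling grid:

```python
import numpy as np

UP = 2  # upsampling factor: half-pixel shifts on the low-res grid

def shift_and_add(frames, shifts):
    """Toy super-resolution: merge low-res frames with known sub-pixel
    shifts onto a finer grid. shifts are offsets on the fine grid (0..UP-1).
    Real burst pipelines also estimate alignment and merge robustly."""
    h, w = frames[0].shape
    acc = np.zeros((h * UP, w * UP))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        acc[dy::UP, dx::UP] += frame   # each sample lands at its true spot
        cnt[dy::UP, dx::UP] += 1
    cnt[cnt == 0] = 1                  # leave unsampled cells at zero
    return acc / cnt

# Four frames offset by half a pixel in y/x tile the fine grid exactly.
scene = np.random.rand(128, 128)                       # stand-in "true" image
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [scene[dy::UP, dx::UP] for dy, dx in shifts]  # simulated captures
print(np.allclose(shift_and_add(frames, shifts), scene))  # True
```

Real pipelines have to estimate the shifts from the frames themselves, but the core idea is the same: motion between captures turns into extra samples.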
especially if you expect to catch up with the 14 stop DR, which might not even be possible with current sensors.
This is already a solved problem, and has been for at least five years. Note that in five years, we've added 20dB of dynamic range and 30dB of scene dynamic range, bumped up the resolution by >6x (technically more like 4x at the same framerate, but 60FPS was overkill anyway), and all at a module cost that I can't explicitly disclose but can guarantee you handily beats any LIDAR pricing outside of Wei Wang's Back Alley Shenzhen Specials. And it could still come down by a factor of 2 in the next few years, provided there's enough volume!
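For reference on the units, since stops and dB get mixed freely in these discussions, the standard conversion is 20·log10(2) ≈ 6.02 dB per stop:

```python
import math

# One photographic stop doubles the signal range: 20*log10(2) ≈ 6.02 dB.
DB_PER_STOP = 20 * math.log10(2)

print(f"14 stops ≈ {14 * DB_PER_STOP:.0f} dB")    # ≈ 84 dB
print(f"+20 dB ≈ +{20 / DB_PER_STOP:.1f} stops")  # ≈ +3.3 stops
```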
In any case, remember that the bet isn't beating the human eye at being a human eye; it's beating the human eye at being the cheap, ready-today vision apparatus for a vehicle. The whole exercise of comparing human eye performance to camera performance is, and has always been, an armchair philosopher debate. It turns out you don't need all the amazing features of the human visual system for the task of driving; they're sufficient, but not necessary, for solving the problem. You need a decent-performance, volume-scalable, low-cost imaging apparatus strapped to a massive amount of decent-performance, volume-scalable, low(ish)-cost post-processing hardware. It's a pretty safe bet that you can bring compute costs down over time, or increase your computational efficiency within the allocated budget over time. It's also a decent bet that the smartphone industry, with annual camera volumes in the hundreds of millions, is going to drive a lot of the camera manufacturing innovation you need, bringing the cost down to tens of dollars or better. Most image sensors already integrate as much of the DSP on-die as possible, in a bid to free up the post-processing hardware to do more useful stuff, and that approach has a lot of room to grow in the wake of advanced packaging and multi-die assembly innovations over the last ten years. All the same major advances could eventually arrive for LIDAR, but it certainly didn't look that way in 2012, and even now in 2023 it still costs me a thousand bucks to kit out an automotive LIDAR because of all the highly specialized electromechanical structures and mounting hardware - money I could be using to buy a half-dozen high-quality camera modules per car...
As far as reaction time, real-time image classification fell to sub-frame processing time years ago, thanks in part to some crazy chonker GPUs available in the last few years. There's a dozen schemes for doing this on video, many in real-time. The real trouble now is chasing down the infinitely long tail of ways for any piece of the automotive vision sensing and processing pipeline to get confused, and weighing the software development and control loop time cost of straying from the computational happy path to deal with whatever they find.
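To put "sub-frame" in numbers (illustrative figures only; the inference latency is an assumption, not a benchmark of any particular model or GPU):

```python
# Illustrative frame-budget arithmetic, not measured latencies.
FPS = 30
frame_budget_ms = 1000 / FPS          # ~33.3 ms available per frame
inference_ms = 10                     # assumed classifier latency
headroom_ms = frame_budget_ms - inference_ms
print(f"budget {frame_budget_ms:.1f} ms, headroom {headroom_ms:.1f} ms "
      f"left for tracking, fusion, and control-loop work")
```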
This is also why I think Tesla's software just sucks. It's not the camera hardware that's the problem any more, and the camera hardware is still getting better. There's just no way not to suck when the competition is literally a trillion-dollar gigalith of the AI industry that optimized for avoiding bad PR and waited an extra four years to release a San Francisco-only taxi service. Maybe if Google was willing to stomach a hundred angry hit pieces every time a Waymo ran into a wall with the word "tunnel" spray-painted on it, we'd have three million Waymos worldwide to usher in a driverless future early. I doubt Amazon has any such inhibitions, so I guess we'll find out soon just how much LIDAR helps cover for bad software.
Your "doesn't get the job done" link doesn't seem to go anywhere... I had to clip out everything past the "mediaplayer" portion of the URL to get to the video, where a tesla slams into a test dummy. But it doesn't take much work to find counterexamples, and this wouldn't be the first time someone fabricated a safety hazard for attention.
I don't think LIDAR is as big of a differentiator as tech press or popular analysis makes it out to be. It's very expensive (though getting cheaper), pretty frail (though getting more durable), and suffers from a lot of the same issues as machine vision (bad in bad weather, only tells you that landmarks have moved rather than telling you anything you can do with this info, false positive object identification). And this is trite, but remains a valid objection: human vision is sufficient to drive a car, so why do we need an additional, complex, fragile sensor operating on human-imperceptible bandwidth to supplement cameras operating in the same bandwidth as human eyes?
Tesla's ideological stance on machine vision seems to be: if camera-based machine vision is insufficient to tackle the problem, we should improve camera-based machine vision until it can tackle the problem. This is probably the right long-term call. If they figure out how to get the kind of performance expected from a self-driving system out of camera-based machine vision, not only have they instantly shaved a thousand bucks of specialty hardware off their BOM; arguably, they've also developed something far more valuable that can be slapped on all variety of autonomous machines and robotics. If the fundamental limitations are in the camera, they can use their demand in automotive as leverage to push major camera sensor manufacturers to innovate in areas where they currently struggle (high dynamic range, ruggedness, volume manufacturability). Meanwhile, there's a whole bunch of non-Tesla people working independently on many of the hard problems on the software side of machine vision; some of the required innovations don't necessarily need to come from Tesla. And for whatever does need to come from Tesla, they've put enough cameras and vehicle computing out in the wild by now that they could plausibly collect a massive corpus of training data and fine-tune on it better than pretty much any other company outside of China.
Google, meanwhile, had years of head start on Tesla, a few hundred billion dollars of computers, at least one lab (possibly several) at the forefront of machine vision research, extremely deep pockets to buy out tens of billions of dollars' worth of competitors and collaborators, limited vulnerability to competitive pressure or failure in their core revenue stream, and a side business mapping the Earth compelling them to create a super-accurate landmark database for unrelated business ventures. I think Google's self-driving vehicles work better than Tesla's because Google held themselves to ludicrously high standards, half of which were for reasons unrelated to self-driving, and the likes of which are probably unattainable for more than a handful of tech megacorps. That they use LIDAR is immaterial - they've been using it since well before the system costs made commercial sense.
As for the rest of Tesla's competitors... when BigAutoCorp presents their risk management case to the government body authorizing the sale and usage of self-driving technology, it sounds a lot more convincing to say "cost is no obstacle to safety" as you strap a few thousand bucks of LIDAR to every machine and spend another few dozen engineering salaries every year on LIDAR R&D. A decade of pushing costs down has brought LIDAR to within an order of magnitude of the required threshold for consumer acceptance. I'll note that comparatively, camera costs were never an obstacle to Tesla's target pricing or market penetration. Solving problems with better hardware is fun, but solving problems with better software is scalable.
That's not to say Tesla's software is better, though. I can't tell if Tesla's standards are lower than their competitors', or if their market penetration is large enough that there's a longer tail of publicized self-driving disasters to draw from, or if there's a fundamental category of objects their current cameras or software can't properly detect. Speaking from experience, I've seen Autopilot get very confused by early-terminating lane markers, gaps in double yellow for left turns, etc. I think their software just kinda sucks. It's probably tough to distinguish good software with no LIDAR from bad software with LIDAR; it's comparatively much easier to identify bad software with no LIDAR. And it's really easy to blame the lack of LIDAR when you're the only people on Earth forgoing it.
Phonological and morphological awareness do seem to be well-correlated with literacy outcomes in both alphabetic and non-alphabetic languages, and there are a lot of meta-analyses showing about the same low-to-moderate correlation in Chinese primary-language learners as in English primary-language learners. There are some studies showing cross-language transfer of phonological and morphological awareness in English/Chinese bilingual households, but no such transfer for orthographic awareness. This seems to suggest there's something fundamental about the cognitive process of organizing and mapping a written language's graphemes (and meaningful constructive subsequences) onto their equivalent phonetics and trivial phonetic expansions, something independent of the language's orthographic characteristics.
In any case, I don't think there's any dispute that written Chinese has semi-consistent phonological and morphological structure. The majority of Chinese characters (maybe 70-80%) are horizontally structured phono-semantic compounds with a semantic left radical and a phonetic right radical; around half are phonologically regular regardless of tone; and there are only a few hundred common semantic and phonetic radicals. There's clearly a massive encoding and decoding efficiency achieved through semi-consistent phonological mapping of the orthography.
It's really hard to find trustworthy or low-bias takes on this topic. There's a vivid, unsettled debate about how exactly the Chinese literacy rate improved (from <20% in the 1950s, to ~96% by the 2010s), and to what extent the introduction of simplification, pinyin, etc played a role. People get downright vicious in these discussions because they tend to get deeply involved in Chinese idpol culture wars. The debate has its own Wikipedia page. I don't place that much confidence in my understanding of how it all fits together, especially through the fog of culture war - this is a mind-bogglingly complex topic. My basic understanding is that opinions vary widely, from believing that simplification (and possibly pinyin, significantly more controversial) fundamentally enabled mass literacy, to believing it was purely a result of herculean educational investment and widespread literary access (and even that simplification/pinyin was reformist nonsense or foreign interference), with huge diversity of opinions on the relative weights of every effect within that spectrum. It seems fairly uncontroversial that China pre-1950s did not have widespread educational access, and that what access existed was often printed in traditional characters or unique regional characters that the masses could not feasibly learn without dedicated scholarly investment. The literacy rates are also undisputed. I don't think it has much relevance to the question of how Chinese children today learn to read, but it's nevertheless an interesting sideshow.
Speaking as someone in the chip industry, we most certainly do rely on China.
The Chinese market is massive, and was, until recently, growing at an eye-watering pace. I know of a few companies that took 20%+ off the balance sheets permanently when the Huawei sanctions hit a few years back. Even if the latest sanctions target advanced capabilities and leading-edge chips, those are still the centerpieces of designs with millions of units of volume (particularly in telecom, for 5G deployment and Chinese Android phones sold across the world), and less-advanced companies filled many roles in those systems, roles which are now jeopardized. Chinese electronics and electronics-adjacent industries, even those not relying on advanced chips or tools, are no doubt eyeing the latest round of sanctions with concern that their niche will be next. Semiconductor sales volume to China is going to slow down a lot for the next year or two, which will damage companies whose growth strategy depended on the continued growth of that market.
I'm less knowledgeable about the specifics on this part, but I also recall that, as little as a few years ago, the semiconductor packaging expertise cultivated in China was unrivaled, particularly in its ability to scale. The more advanced devices nowadays bond the die to a PCB-like substrate with extremely fine-pitch routing on many layers of high-density film, to fan out the contact points on the die to a reasonable pitch and to improve signal/power integrity. While in theory the manufacture of the substrate and the bonding of the die can be done anywhere, China offered an unrivaled combination of rapid turnaround, high volume, and excellent quality (provided you knew where to look). There are a lot more packaging techniques developed and scaled in China; I just picked this as an example I remember. With chiplet designs for processors and chip-stacking technologies for flash memory, packaging is getting more demanding by the day. There are no explicit sanctions on this packaging equipment as far as I can tell - packaging is something the fab can contract out to a third party, and I suspect the sanctions are targeted narrowly at fab companies. Will large US semiconductor companies still need to process their finished dice in China, presenting additional risks for export control? Will the US summon up another round of sanctions to decapitate the packaging industry as well? Perhaps the industry has quietly de-risked itself over the last few years, but I can't find evidence of this with trivial googling.
Anyway... we do rely on China, quite a lot, for both market size and post-fab manufacturing. Sanctions aren't doomsday, but definitely more than a haircut.
And that's before considering the possibility of TSMC catching some "errant" missiles in a hypothetical conflict (much less hypothetical than two weeks ago, to boot), knocking over more than half of worldwide advanced semiconductor production.
The report in source 2 of gp actually addresses this...
Research by the Office of Immigration Statistics replicates the Fazel-Zarandi et al. methodology and assesses the possibility that the size of the unauthorized population was in the range of 16.2–29.5 million on January 1, 2017 as Fazel-Zarandi et al. conclude, rather than 11.4 million as the DHS residual model estimates. One key finding is that the difference between Fazel-Zarandi et al.'s results and DHS's residual model is entirely driven by high estimated growth in Fazel-Zarandi et al.'s model during the 1990s—yet key data required for inflow-outflow modeling are not available for those years. These data limitations, along with a number of questionable modeling assumptions, give DHS no confidence in Fazel-Zarandi et al.'s findings about population growth in 1990-2000. A forthcoming DHS whitepaper includes a preliminary inflow-outflow analysis that is similar to the Fazel-Zarandi et al. method but updates certain assumptions and makes fuller use of DHS data for 2000–2018; the paper finds support for the DHS estimate of about 11.4 million people as of Jan. 1, 2018 (Rosenblum, Baker, and Meeks, forthcoming).
Can you help me understand how you arrived at the conclusion that visa overstays are the largest group of illegal immigrants in the US? I looked at the overstay reports and I see a somewhat consistent estimate of about 700k per year. 25% of 2M is 500k, but it only represents actual encounters, so I'd expect this number to be the sum of the encounters released and the non-encounters. If even 10% more illegal immigrants are crossing without an encounter, it seems to me that the rate of growth of non-visa overstay illegal immigrants is larger, especially as of the last few years. Is the argument that the total visa overstay population is still larger than the total illegal southern border crossing population? I didn't see estimates for either of those numbers in the overstay reports.
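To spell out the arithmetic behind my question (figures as above; the gotaway share is the assumption I'm asking about):

```python
# Rough figures from the overstay reports / parent discussion; the
# "gotaway" share is my assumption, and it's the crux of the question.
overstays_per_year = 700_000      # roughly consistent estimate in the reports
encounters = 2_000_000            # southern border encounters in a year
released_share = 0.25             # share of encounters released into the US
gotaway_share = 0.10              # assumed crossings with no encounter at all

border_inflow = encounters * (released_share + gotaway_share)
print(border_inflow)              # 700,000: parity with overstays at exactly
                                  # 10% gotaways; anything higher tips it over
```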
Nominally:
Sometimes the output halted in weird spots, and you could push it a little further with some extra input. So in some cases, you'll see obvious prompt continuation.
In practice, the highlighting had a lot of issues, and it frequently over- or under-represented the amount of AI-generated content. The original author might have explained somewhere on Twitter how much prompt continuation was needed vs how much was just GPT-3 having weird issues. Or maybe the whole thing is secretly fake and green highlighting was added in post. Given the widespread production of similar bottomless pit greentexts in the wake of the original, I think it's probably real output.
In some sense, being a cleaned-up prompt-continuation stitch feels a little bit like bumper rails at the bowling alley. It's a lot easier to perform well when you get that much additional guidance. Arguably the whole punchline is human-written, which moves the goalposts for this accomplishment from writing a spectacular joke unaided to filling in the world's most obvious Mad Libs blank... But remember, it only feels obvious to you and me. Out of all the words in the English language, GPT-3 correctly predicted the funniest one. It has a literal sense of humor. And that's pretty scary.