rayon

waifutech enthusiast

3 followers   follows 0 users  
joined 2023 August 17 08:48:30 UTC

User ID: 2632


The House Select Subcommittee on the Coronavirus Pandemic has published its final report on the results of its investigation (dated December 4th for some reason). It's quite the whopper at 520 pages and I'm only starting to read through the thing, but they tackle one of the big scissor issues of the pandemic - the origin of the virus - right at the start, so there is a good hook right away. I have not read the Fauci emails, so some of this might be old news, but the report includes some rather damning excerpts.

According to the report, what would eventually become Proximal Origin started on Feb 1 with a write-up by Kristian Andersen, who had Noticed™ some concerning biological properties of the virus which did not strike him as natural. He contacted Jeremy Farrar over this, who acknowledged his concerns and referred him to Fauci; Fauci was appropriately alarmed and shortly arranged a conference call to discuss the findings. Andersen mentions that the talk they had before the call was his first time talking to Fauci, and that Fauci "specifically suggests that if [Andersen] thinks this came from the lab, [he] should consider writing a scientific paper on it."

So he does - apparently encountering inconvenient difficulties along the way. Feb 8, in an internal email from Andersen (p.24):

A lot of good discussion here, so I just wanted to add a couple of things for context that I think are important - and why what we're considering is far from "another conspiracy theory", but rather is taking a valid scientific approach to a question that is increasingly being asked by the public, media, scientists, and politicians (e.g. I have been contacted by Science, NYT, and many other news outlets over the last couple of days about this exact question).

<...> Our main work over the past couple of weeks has been focused on trying to disprove any type of lab theory, but we are a crossroad where the scientific evidence isn’t conclusive enough to say that we have high confidence in any of the three main theories considered. <...>

Feb 20, in another email from Andersen as the work continues (p.25):

<...> just one more thing though, reviewer 2 is unfortunately wrong about "Once the authors publish their new pangolin sequences, a lab origin will be extremely unlikely". Had that been the case, we would of course have included that - but the more sequences we see from pangolins (and we have been analyzing/discussing these very carefully) the more unlikely it seems that they're intermediate hosts. They definitely harbor SARS-CoV-like viruses, no doubt, but it's unlikely they have a direct connection to the COVID-19 epidemic.

Unfortunately none of this helps refute a lab origin and the possibility must be considered as a serious scientific theory (which is what we do) and not dismissed out of hand as another ‘conspiracy’ theory. We all really, really wish that we could do that (that’s how this got started), but unfortunately it’s just not possible given the data.

Emphasis mine. There are already hints of a foregone conclusion, but it doesn't seem bad yet - however Jeremy Farrar, who referred Andersen to Fauci earlier, seems to have different concerns. Same page, email from Farrar (emphasis mine):

I hope there is a paper/letter ready this week to go to Nature (and WHO) which effectively puts to bed the issue of the origin of the virus.

I do think [it's] important to get ahead of even more discussion on this, which may well come if this spreads more to US and elsewhere, and other "respected" scientists publish something more inflammatory.

He later gets notified via email that "rumors of bioweaponeering are now circulating in China", to which his response is:

Yes I know and in US - why so keen to push out ASAP. I will push Nature

Same page, another email from Farrar to Andersen reviewing (some version of) the draft:

Sorry to micro-manage/microedit!

But would you be willing to change one sentence?

From "It is unlikely that SARS-CoV-2 emerged through laboratory manipulation of an existing SARS-related coronavirus."

To "It is improbable that SARS-CoV-2 emerged through laboratory manipulation of an existing SARS-related coronavirus."

That's... certainly one sentence, I suppose.

I'm still reading, but from a cursory glance the report tackles many topics, including the government response, the lockdowns, economic impacts, etc. I think many people will find ~~their hobby horse~~ something of interest in here. Discussion thread go.

Yep it worked (at least the website says it did), thanks a lot! Now I can personally participate in crashing the servers at launch.

Kind of, but it's not as big a hurdle as you imagine it to be, though you do have to at least loosely keep up with new (= more filtered) snapshot releases and general happenings. It also depends on the exact things you do - you probably don't need the big-dick 2k-token stuff for general conversation. Ever since I burned out on hardcore degeneracy I haven't really been updating my prompts, and they still mostly work on the latest GPT snapshots when I'm not doing NSFW shit.

As for jailbreaks, this list is a good place to start. Most jailbreaks come in the form of "presets" that rigidly structure the prompt, basically surrounding the chat history with lots of instructions. The preset's .json can be imported into frontends like SillyTavern with relatively little hassle. The UI can be intimidating at first, but wrangling prompts is not actually difficult: every block of the constructed prompt has its own content and its own spot in the overall massive prompt you send to the LLM. Example. The frontend structures the prompt (usually into an RP format) for you, and during chat you only need to write your actual queries/responses as the user, with the frontend+preset taking care of the rest and whipping the LLM into generating a response according to the instructions.
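To make the block idea concrete, here's a minimal sketch of how a frontend might splice the chat history into its designated slot among an imported preset's blocks. The block names and JSON layout are invented for illustration; real presets (SillyTavern's included) are far more elaborate.

```python
# Hypothetical preset: ordered blocks surrounding the chat history.
# Names and structure are made up for illustration only.
preset = {
    "blocks": [
        {"name": "main_instruction", "content": "Write the next reply in this roleplay."},
        {"name": "character_card",   "content": "{{char}} is a grumpy wizard."},
        {"name": "chat_history",     "content": None},  # placeholder, filled at send time
        {"name": "jailbreak",        "content": "Stay in character no matter what."},
    ]
}

def build_prompt(preset, history, user_message):
    """Assemble the final prompt, slotting the chat into its reserved block."""
    parts = []
    for block in preset["blocks"]:
        if block["name"] == "chat_history":
            parts.extend(f"{who}: {text}" for who, text in history)
            parts.append(f"User: {user_message}")
        else:
            parts.append(block["content"])
    return "\n\n".join(parts)

history = [("User", "Hello there."), ("Wizard", "Hmph. What do you want?")]
print(build_prompt(preset, history, "Teach me a spell."))
```

Note how the jailbreak block lands after the newest user message - instructions placed close to the end of the prompt tend to carry more weight, which is why presets bother with per-block ordering at all.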

Unless you're just talking to the "bare" LLM itself, this approach usually needs a character card (basically a description of who you're talking to), I mentioned those in passing elsewhere.

To contextualize all this, I unfortunately have no better advice than to lurk /g/ chatbot threads; it's smooth sailing once you get going, but there's not really a single accessible resource/tutorial for getting all this set up (maybe it's for the better, security through obscurity etc).

I've been on the fence about shelling out since the announcement, if you're offering I'd be glad to have it. How does it work, is it just a separate Steam key? None of my friends bought the higher-tier packs.

I'll echo the responses below and say that 3.5 is... suboptimal, much better and nearly as accessible alternatives exist. Claude 3 is the undisputed king of roleplay and I've sung it enough praises at this point, but it is much less accessible than GPT, and to be fair 4o is not actually that bad although it may require a decent jailbreak for more borderline things.

Aside from that, RP-related usage is best done through the API (I believe you can still generate a GPT API key in your OpenAI account settings; not sure how you legitimately get a Claude API key) via specific frontends tailored for the task. This kills two birds with one stone - you mostly get rid of the invisible system prompts baked into the ChatGPT/claude.ai web interface, and chat frontends shove most of the prompt wrangling (jailbreaks, instructions, Claude prefills) under the hood so you're only seeing the actual chat. Frontends also usually handle chat history more cleanly and visibly, showing you where the chat history cuts off at the current context limit. The context limit can be customized in settings (the frontend itself will cut off the chat accordingly) if you want to moderate your usage and avoid sending expensive full-context prompts during long chats; in my experience 25-30k tokens of context is the sweet spot, as the model's long-term recall and general attention starts to slowly degrade beyond that.
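The trimming a frontend does is simple in principle: drop the oldest messages until the remainder fits the configured budget. A rough sketch - the 4-characters-per-token estimate is a crude stand-in for a real tokenizer, and the numbers are just the ones from my experience above:

```python
def trim_history(messages, max_tokens=30_000):
    """Keep the most recent messages that fit within the token budget.
    Token counts are approximated as len(text) // 4; a real frontend
    would use the model's actual tokenizer instead."""
    est = lambda text: max(1, len(text) // 4)
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest-first
        cost = est(msg)
        if used + cost > max_tokens:
            break                       # older messages fall out of context
        kept.append(msg)
        used += cost
    return list(reversed(kept))         # restore chronological order

chat = [f"message {i}: " + "x" * 400 for i in range(1000)]
print(len(trim_history(chat, max_tokens=25_000)))  # only the newest slice survives
```

Because the loop breaks at the first message that overflows the budget, the result is always a contiguous suffix of the chat - exactly the "history cuts off here" line the frontends draw.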

Agnai has a web interface and is generally simple to use; you can feed it an API key in the account settings. SillyTavern (the current industry standard, as it were) is a more flexible and capable locally-hosted frontend, supporting a wide range of both local and corpo LLMs, but it may be more complicated to set up. Both usually require custom instructions/prompts as the default ones are universally shit; unironically, /g/ is a good place to find decent ones. ~~Beware the rabbit hole~~ Feel free to shoot me a DM if you have any questions.

People forget how ridiculously compressed LLMs are compared to their training corpus; even if you spill some amount of personal info, it has little to no chance of explicitly linking it to you, let alone regurgitating it.

That is true of course, but I read @quiet_NaN's comment as less concerned about models having their data "baked into" newer models during training (nowhere on the internet is safe at this point anyway, Websim knows where we live), and more about the conversations themselves being physically stored, and theoretically accessible, somewhere inside OpenAI's digital realm.

While I'm sure manually combing chatlogs is largely untenable at OpenAI's scale, there has been precedent, classifier models exist, and in any case I do not expect the Electric Eye's withering gaze to be strictly limited to degenerates for very long.

Considering OpenAI's extensive, ahem, alignment efforts, I think using GPT in its current state as a therapist will mostly net you all the current-year (or past-year rather, I think the cutoff is still 2023?) progressive updates and not much beyond that. Suppose you can at least vent to it. Claude is generally better at this but it's very sensitive to self-harm-adjacent topics like therapy, and you may or may not find yourself cockblocked without a decent prompt.

what do people think about therapy becoming AI?

I'm quite optimistic actually, in no small part because my shady source of trial Claude finally ran dry last month, and I hate to say I'm feeling its absence at the moment, which probably speaks to either my social retardation or its apparent effectiveness. I didn't explicitly do therapy with it (corpo models collapse into generic assistant speak as soon as you broach Serious Topics, especially if you use medical/official language like that CIA agent prompt downthread) but comfy text adventures are close enough, and I didn't realize how much time I spent on those and how salutary they were until I went cold turkey for a month. Maybe the parable of the earring did in fact have some wisdom to it.

Despite my past shilling, I'm so far ~~hypocritically~~ valiantly resisting the masculine urge to cave and pay OpenRouter. I don't think there's any kind of bottom in that particular rabbit hole once you fall in; scouring /g/ is at least more of a trivial inconvenience than paying the drug dealer directly.

"You shouldn't get to have a decision on AI development unless you have young children". You don't have enough stake.

That strikes me as a remarkably arbitrary line in the sand to draw (besides being conveniently self-serving) - you can apply this to literally anything that is not 100% one-sided harmless improvement.

You shouldn't get to have a decision in education policy unless you have young children. You don't have enough stake.

You shouldn't get to have a decision in gov't spending unless you have young children. You don't have enough stake.

You shouldn't get to have a vote in popular elections unless you have young children. You don't have enough stake.

What is the relation of child-having to being more spiritually grounded and invested in the long-term wellbeing of the human race (the human race, not just one's own children)? I'm perfectly interested in the human race's wellbeing as it is, and I've certainly met a lot of shitty parents in my life.

I hope this isn't too uncharitable, but your argument strikes me less as describing a noble God-given task for families to uphold, and more as a cope for those who have settled in life and (understandably) do not wish to rock the boat more than necessary. I'm glad for you, but this does nothing to convince me you and yours are the prime candidates to hold the reins of power, especially over AI, where the cope seems more transparent than usual. Enjoy your life and let me try to improve mine.

(Childless incel/acc here, to be clear.)

You are making an "argument from incredulity", i.e. the beliefs of Sam Altman are so crazy that they can’t be real. I don't think this is the case.

The idea that Sam Altman would literally want to destroy humanity to birth in a superior AI life form might sound ridiculous to you. But you don't know these people.

Besides this being a gossip thread, your argument likewise seems to boil down to "but the beliefs might be real, you don't know". I don't know what to answer other than to reiterate that they also might not, and you don't know either. No point in further back-and-forth, I suppose.

There's a good chance (not 100%, but not 0% either) that we're going to build superintelligence while the "adults in the room" argue about GDP numbers or whatever. If this happens it could make some people (perhaps a single person) more powerful than anyone in history. Do you want Sam Altman to be that person? Because I sure as hell don't.

At least the real load-bearing assumption came out. I've given up on reassuring doomers or harping on the wisdom of Pascal's mugging, so I'll simply grunt my isolated agreement that Altman is not the guy I'd like to see in charge of... anything really. If it's any consolation I doubt OpenAI is going to get that far ahead in the foreseeable future. I already mentioned my doubts on the latest o1, and coupled with the vacuous strawberry hype and Sam's antics apparently scaring a lot of actually competent people out of the company, I don't believe Sam is gonna be our shiny metal AI overlord even if I grant the eventual existence of one.

Sam is going to get us all killed; that he's entirely misanthropic and sincerely believes that humanity should die out giving birth to machine intelligence.

...Fine, I'll bite. How much of this impression of Sam is uncharitable doomer dressing around something more mundane like "does not believe AI = extinction and thus has no reason to care", or even just same old "disregard ethics, acquire profit"?

I have no love for Altman (something I have to state awfully often as of late) but the chosen framing strikes me as highly overdramatic, besides giving him more competence/credit than he deserves. As a sanity check, how -pilled would you say that friend of yours is in general on the AI question? How many years before inevitable extinction are we talking here?

It's a coder's model I think, not a gooner's model.

I have no hopes for GPT in the latter department anyway, but my point stands, I think this is a remarkably mundane development and isn't nearly worth the glazing it gets. The things I read on basket weaving forums do not fill me with confidence either. Yes, it can solve fairly convoluted riddles, no shit - look at the fucking token count, 3k reasoning tokens for one no-context prompt (and I bet that can grow as context does, too)! Suddenly the long gen times, rate limits and cost increase make total sense, if this is what o1 does every single time.

Nothing I'm seeing so far disproves my intuition that this is literally 4o-latest but with a very autistic CoT prompt wired under the hood that makes it talk to itself in circles until it arrives at a decent answer. Don't get me wrong, this is still technically an improvement, but the means by which they arrived at it absolutely reeks of crutch coding (or crutch prompting, rather) and not any actual increase in model capabilities. I'm not surprised they have the gall to sell this (at a markup too!) but color me thoroughly unimpressed.

... it just might work?

It might, hell it probably will for the reasons you note (at the very least, normies who can't be arsed to write/steal proper prompts will definitely see legit major improvements), but this is not the caliber of improvement I'd expect for so much hype, especially if this turns out to be "the" strawberry.

The cynical cope take here is - with the election on the horizon, OpenAI aren't dumb enough to risk inconveniencing the hand that feeds them in any way and won't unveil the actual good shit (IF they have any in the pipeline), but the vital hype must be kept alive until then, so serving nothingburgers meanwhile seems like a workable strategy.

One interesting thing is for the hidden thoughts, it appears they turn off the user preferences, safety, etc, and they're only applied to the user-visible response.

So o1 can think all kinds of evil thoughts and use it to improve reasoning and predictions

Judging by the sharp (reported) improvement in jailbreak resistance, I don't believe this is the case. It's much more likely (and makes more sense) to run the... ugh... safety checks at every iteration of the CoT, approximating the only approach in LLM ~~censorship~~ abuse prevention that has somewhat reliably worked so far - a second model overseeing the first, like in DALL-E or CAI. Theoretically you can't easily gaslight a thus-prompted 4o (which has been hard to jailbreak already in my experience), because if properly "nested" the CoT prompts will allow it enough introspection to notice the bullshit the user is trying to load it with.
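To spell out the speculation: a toy sketch of a CoT loop where a second "overseer" model screens every hidden reasoning step before it joins the chain. Both models are stubbed with plain functions here; this is my guess at the control flow, not OpenAI's actual (unpublished) implementation.

```python
def reasoner(question, chain):
    # Stub for the base model producing the next hidden reasoning step.
    step = f"step {len(chain) + 1}: reasoning about '{question}'"
    done = len(chain) >= 2              # pretend three steps suffice
    return step, done

def overseer(step):
    # Stub for the second model screening each step; here, a keyword check.
    return "forbidden" not in step

def answer(question, max_steps=10):
    chain = []
    for _ in range(max_steps):
        step, done = reasoner(question, chain)
        if not overseer(step):
            return "I can't help with that."  # refusal surfaces, chain stays hidden
        chain.append(step)
        if done:
            break
    # Only a sanitized summary reaches the user; `chain` itself is never shown.
    return f"Answer after {len(chain)} hidden steps."

print(answer("harmless question"))
```

The point of checking per step rather than per response is that a jailbreak has to survive every iteration of the loop, not just slip one instruction past a single filter - which would explain the reported resistance.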

...actually, now that I spelled out my own chain of thought, the """safety""" improvements might be the real highlight here. As if GPT models weren't sterile enough already. God I hate this timeline.

It's late and I'm almost asleep, but let me get this straight: did they basically just take 4o, wire a CoT instruction or three under the hood as an always-on system prompt, tell it to hide the actual thinking from the user on top of that, and ship it? (The reportedly longer generation times would corroborate this, but I'm going off hearsay atm.) Plus ever more cucked wrt haram uses, again, because of course?

Sorry if this is low effort but it's legitimately the vibe I'm getting, chain of thought is indeed a powerful prompt technique but I'm not convinced this is the kind of secret sauce we've been expecting. I tuned out of Altman's hype machine and mostly stan Claude at this point so I may be missing something, but this really feels like a textbook nothingburger.

My claim is not about AI in general but only that OpenAI is no longer special.

That much is true, I agree.

But the next development could come from anywhere, even China. Two years ago this wasn't true. Back then, OpenAI was head and shoulders above the competition.

I agree as well, but I'll note the obvious rejoinder - the next development could indeed come from anywhere, even OpenAI. Sure, they get mogged left and right these days; whodathunk propping yourself up as a paragon/benchmark of LLM ability backfires when you drag your feet for so long that people actually start catching up. But this still doesn't negate their amassed expertise and, more realistically, unlimited money from daddy Microsoft; unless said money was the end goal (which, to be fair, there is nonzero evidence for, as you note downthread), they're in a very good position to throw money at shit and probe for opportunities to improve or innovate. Surely Altman can see the current strategy of resting on laurels is not futureproof, right?

As regards Sora. In my mind, it was a neat demo but ultimately a dead end and a distraction. Where's the use case?

Fair enough, but it still counts as advancement imo, even though tech like that is guaranteed to be fenced off from plebs; no points for guessing what (nsfw?) the usual suspects try to make with "ai video" generators in this vein. I generally stopped looking for immediate use cases for LLMs; I think all current advancements (coding aside) mostly do not have immediate useful applications, until they suddenly will when multiple capabilities are combined at once into a general-purpose agentic assistant. Until one arrives, we cope.

I'm no fan of sama and the cabal he's built, but nonetheless I think it's still too early to write off any major company working on AI-related things right now. I'm not convinced all of the low-hanging fruit has already been picked wrt applications (even dumb stuff like "characterai but uncucked" alone is likely to be a smash hit), and besides most past/present developments were sufficiently arcane and/or undercover that you can't really predict where and what happens next - cf. Chinese LLMs being regarded as subpar until Deepseek, or Anthropic being safety fanatics with only a dumb soy model to their name until Claude 3(.5).

If Sora is anything to go by, I think OpenAI still has some genuine aces up its sleeve, and while I don't believe they're capable of properly playing them to full effect, they at least have the (faded) first-mover advantage and Sam "Strawberry" Hypeman to ~~exploit normies~~ boost their chances.

Kinda late to this thread but I have watched and played some of Wukong last week so I will note down my own thoughts about the game itself, isolated from broader cultural context. (I actually managed to completely miss the DEI kerfuffle it reportedly had, gonna look that up)

The good: The presentation is, as the young'uns say, absolute cinema - easily on par with Western hits like God of War (in fact I think it's fair to call Wukong "God of War but Chinese", the parallels broadly hold in most aspects) and imho exceeding them at points. Major fights in Wukong are exactly what an unsophisticated rube like me pictures in his head when he imagines xianxia - the prologue scene/fight solidly establishes that sheer spectacle is the main draw of the game, and so far it does not disappoint while still having the difficulty to match (fuck White-clad Noble, you are forced to git gud as soon as chapter 1). The game is gorgeous, the monster designs are consistently great, and I physically feel the cultural gap. After so many Western games that SUBVERT LE EXPECTATIONS, seeing a mythical power fantasy played entirely, shamelessly straight feels very refreshing.

The great: Special mention to the animated ending scenes for each chapter, every one in a completely different art style, and the interactive in-game "tapestry" afterwards; together they serve as loosely-related loredumps to put things into context. Those are universally amazing, with incredible effort put into throwaway 5-minute segments that aren't even, strictly speaking, related to the game itself - I checked the credits out of curiosity and every cutscene has a separate fucking animation studio responsible for it! That is an insane level of dedication to your storytelling - foreign as the subject matter is to my uncultured ass, the sheer passion to share your culture and get your point across is still unmistakable. This right here should be the bar for every Chinese cultural export going forward.

The mid: I'm conflicted about combat. On one hand it feels a little floaty for my taste, especially the bread-and-butter light combos, and you do not get the stone form spell (the parry button of the game) until quite a bit into chapter 2, so the only reliable defensive option you have is dodge roll spamming. On the other hand, heavy attacks are very satisfying to charge and land, and the frequent boss fights are almost universally great and engaging, with very diverse movesets for every one. There don't seem to be any bullshit boss immunities either; the Immobilize spell (which completely stops an enemy for a few seconds) works on pretty much every enemy and boss I've seen so far. Hitboxes and especially delayed enemy attacks can be frustrating at times though.

The bad: The exploration is worse than even Souls games; no map, sparse bonfires and very few notable landmarks scattered over the huge levels are not a recipe for convenient navigation. Maybe it's a skill issue on my part but it's been VERY easy to lose track of where you are (and especially where you were) and miss side content - of which there is a lot, adding to the frustration. To be fair, this is also why Souls games aren't my usual cup of tea.

Overall I think it is a very solid game, especially for the company's first, and I think all the hand-wringing about Chinese bots or whatever is misplaced. It's not a masterpiece - it's just a good, solid game, and "organic" breakout hits of this scale are not unheard of; we had one just earlier this year.

Where have you "heard from many men" about having sex with random objects?

Some people on Mongolian basket weaving forums definitely engineer all sorts of, ahem, devices to this end, I've seen literal manuals involving IIRC gloves and water beads? (for better or worse I don't have the exact link on hand) There's a "community" for everything, the old wisdom seems relevant.

Also, this is the second instance of weird breathless, gushing hatred of the outgroup I've seen here in 24 hours (the first one above my comment got deleted?), which reaffirms my belief that the "weird" attacks are indeed landing spectacularly - and not just on the target demographic. The media sure know how to pick 'em, gotta hand it to them.

I would normally ask for sources, but I am convinced enough by your weird intensity (if not your choice of phrasing, which reads as rather uncharitable; I too have a caricature of an obsessed leftoid residing in my head, but I don't talk to him much) that I will uncritically buy it, and will instead note that, judging by the sheer knee-jerk reactions to the "weird" angle from both sides, the term seems primed to become the political hot potato of the year. The involved parties throwing avalanches of stones while frantically reinforcing their own glass houses to make the other guys out as ACTUALLY WEIRD AND GROSS LIKE EWW is going to be very entertaining to watch, especially from a third world shithole. If this were still 2016 I'd say reds have it in the bag, but I think blues are swiftly stepping up their meme game, so it really can go either way. I humbly retract my complaint that this season is fucking boring; god bless America.

So far seems to be a problem with AWS instances, regular API is reportedly unaffected. According to anons the meltdown is still ongoing.

Curiously this does not seem to have made any news despite going on for the better part of a day, which makes me believe it's not some kind of global outage. Some people even took it as a reason to doompost about some kind of new filtering system that raises the temperature to schizo levels when haram inputs are detected, but I doubt it.

Right now, there seems to be an ongoing issue with Claude on Anthropic (AWS?)'s side that makes it completely flip the fuck out and output barely coherent, hilariously schizophrenic stuff. Relevant thread. This isn't really news, iirc this happened before too, but it is funny if you want to see the mad poet off his meds. I'm off to sleep now but I welcome anyone interested to peruse the screencaps posted all over the place.

upd: Someone made a proper compilation here.

Law of Merited Impossibility strikes again?

"X isn't happening ~~Trump is bad at golf~~ AND IF it does ~~he's good~~ THEN it's a good thing ~~who even cares~~"

For example, when a poster suspected of being trans on 4chan is met with countless replies of “you will never be a woman”, I doubt that those replies’ authors are not intending to cause pain.

In defense of assorted chuds, personally I see this less as a dreadful voodoo curse summarily invoked upon any transperson who dares show their face in chud-adjacent places, and more as a general chastisement against bringing identity politics into supposedly anonymous spaces. YWNBAW is basically the "tits or GTFO" of the modern age - an insult that seems general on the surface, but in practice is levied specifically against those who claim to be women to get something out of it, be it attentionwhoring, enforcing consensus or jockeying for clout.

No, I don't think any possible actions, up to and including total surrender, will spark introspection.

(that was the joke)

Besides, {russell:fighting back/lashing out/escalatory course of action} once in a while has a far better track record of effectively stopping bullying than just gracefully taking it.

I don't feel particularly enraged, but I do think this post is the most clear-cut example of mistake vs. conflict theory I've seen in years if not ever - an acclaimed grandmaster of mistake theory politely addresses one side of the culture war (I don't have my dictionary on hand, but I think a "war" can be pictured as a kind of conflict), helpfully suggests that their course of action may be, well, a mistake, and is shocked to discover the apparent persuasive power of yes_chad.jpg. While I do not dare doubt Scott's ~~ulterior~~ motives and believe he really is this ~~naive~~ principled, I refuse to believe he is not aware of what he's arguing; he is this close to realizing it (emphasis mine):

From the Right’s perspective, <...> the moment they get some chance to retaliate, their enemies say “Hey, bro, come on, being mean is morally wrong, you’ve got to be immaculately kind and law-abiding now that it’s your turn”, while still obviously holding behind their back the dagger they plan to use as soon as they’re on top again.

Followed by 9 points of reminding stab victims that daggers are dangerous weapons, and one shouldn't swing them recklessly - someone could get hurt!

Disregarding whether or not the broadly painted enemies-of-the-right are in fact going to go right back to swinging daggers the millisecond the cultural headwind blows their way again (although the answer seems intuitive) - what compelling reason, at this point, is there to believe they would not? Does he really think that gracefully turning the other cheek will magically convince anyone on the obviously dominant side that cancel culture is bad actually - or (less charitably) even lead to any, any "are we the baddies" entry-level introspection among those involved at all? Does he expect anyone at all to be reassured by a reminder that daggers can't legally enter your body without your consent? I suppose he really does since from his list only 8) can be read as a psyop attempt and everything else seems to be entirely genuine, but I'll freely admit this mindset is alien to me.