A post about it that's in the middle of a Bluesky thread - https://subium.com/profile/figgityfigs.bsky.social/post/3kuk2hjgo3k2x

The start of the thread - https://subium.com/profile/figgityfigs.bsky.social/post/3kujzuo6shk26

  • BodyBySisyphus [he/him] · 20 days ago

    Hear me out, so what we do is this:

    • Humans are naturally fallible, and even smart ones are prone to snap judgments based on partial or biased information, or swayed by emotional entanglement
    • So let's create a decision-making AI - we can teach it to govern fairly, and it will make the best decisions
    • But wait, any AI we create is naturally going to be programmed from our limited point of view and may end up making a weird choice due to a programming flaw or an edge case it wasn't trained to handle
    • So what we do is create two more AIs, with slightly different parameterizations and randomized training scenarios
    • The three AIs will act as checks and balances on one another (see the sketch after the comment)
    • We can even try to embody different aspects of how humans approach problems and make decisions, using something like Jungian archetypes to help choose among difficult tradeoffs
    • Everything will be fine
    spoiler: unless :angel-biblical-shh:
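
As an aside, the scheme the comment sketches - three decision-makers given slightly different parameters and training seeds, kept honest by majority vote - fits in a few lines of code. Here's a minimal Python sketch; the `Advisor` class, the `bias`/`seed` knobs, and all the numbers are invented for illustration and don't describe any real system:

```python
import random
from collections import Counter
from dataclasses import dataclass

@dataclass
class Advisor:
    """One decision-maker, perturbed by its own parameters and seed."""
    name: str
    bias: float  # stands in for "slightly different parameterizations"
    seed: int    # stands in for "randomized training scenarios"

    def decide(self, options: list[str], scores: dict[str, float]) -> str:
        # Each advisor perturbs the shared scores with its own noise,
        # so a flaw in one parameterization shouldn't sway all three.
        rng = random.Random(self.seed)
        noisy = {o: scores[o] + self.bias * rng.uniform(-1, 1) for o in options}
        return max(options, key=lambda o: noisy[o])

def govern(options: list[str], scores: dict[str, float]) -> str:
    advisors = [
        Advisor("advisor-1", bias=0.1, seed=1),
        Advisor("advisor-2", bias=0.2, seed=2),
        Advisor("advisor-3", bias=0.3, seed=3),
    ]
    votes = Counter(a.decide(options, scores) for a in advisors)
    choice, count = votes.most_common(1)[0]
    # Majority rules; a three-way split is the hard part left unsolved.
    if count < 2:
        raise RuntimeError("No consensus among advisors - escalate to humans")
    return choice

if __name__ == "__main__":
    print(govern(["policy_a", "policy_b"], {"policy_a": 0.6, "policy_b": 0.55}))
```

With two options and three voters a majority always exists; with more options the tiebreak (the comment's "difficult tradeoffs") is exactly where the joke's failure mode lives.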