By using unorthodox "cyclic" strategies—ones that even a beginning human player could detect and defeat—a crafty human can often exploit gaps in a top-level AI's strategy and fool the algorithm into a loss.

Preprint of the actual paper summarized in the Ars Technica piece:

https://arxiv.org/pdf/2406.12843

Prior work found that superhuman Go AIs like KataGo can be defeated by simple adversarial strategies. In this paper, we study whether simple defenses can improve KataGo’s worst-case performance. We test three natural defenses: adversarial training on hand-constructed positions, iterated adversarial training, and changing the network architecture. We find that some of these defenses are able to protect against previously discovered attacks. Unfortunately, we also find that none of these defenses are able to withstand adaptive attacks. In particular, we are able to train new adversaries that reliably defeat our defended agents by causing them to blunder in ways humans would not. Our results suggest that building robust AI systems is challenging even in narrow domains such as Go.
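
For a rough sense of what the "iterated adversarial training" defense means, here is a minimal sketch of the generic attack-and-defend loop in Python. The helper names (`train_adversary`, `finetune_victim`, `adversary_win_rate`) and the stopping threshold are hypothetical placeholders for illustration, not the paper's or KataGo's actual code.

```
def train_adversary(victim):
    """Train an attacker policy against a frozen copy of the victim (toy stand-in)."""
    return {"targets_version": victim["version"]}

def finetune_victim(victim, adversary):
    """Fine-tune the victim on the games it lost to the adversary (toy stand-in)."""
    return {"version": victim["version"] + 1}

def adversary_win_rate(adversary, victim):
    """Estimate how often the adversary still beats the current victim (toy stand-in)."""
    return 0.0 if victim["version"] >= 3 else 1.0

victim = {"version": 0}
for _ in range(5):                                 # a few attack/defend rounds
    adversary = train_adversary(victim)            # attack step: search for an exploit
    if adversary_win_rate(adversary, victim) < 0.05:
        break                                      # no reliable exploit found; stop
    victim = finetune_victim(victim, adversary)    # defense step: patch the exploit

print(f"final victim version: {victim['version']}")
```

The paper's negative result is about what happens after a loop like this ends: a fresh adaptive adversary trained against the final defended agent can still find new exploits and win reliably.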

  • Acute_Engles [he/him, any] · 1 month ago
    Go is so fun, but it's so hard to get anyone to play with me IRL, even when I try smaller boards.

    I watched the AlphaGo thing live the first time it won, and it was pretty neat on a "wow! Computer!" level.

    • Palacegalleryratio [he/him] · 1 month ago
      I’m sure I’ve read an adage along the lines of: the hardest part of learning Go is finding someone to play with.

    • context [fae/faer] · 1 month ago

      Yeah, it's a time commitment, and especially starting out, many people get overwhelmed by the number of options they have to choose from when making a decision. I never got very good, myself.