- cross-posted to:
- technology
I'm curious how each agent differs, or how they're trained. It seems they had doctor and nurse agents, as well as patient agents. This would be a good way to start a partial implementation. It would allow some tasks to be taken over by the agents in a hybrid format, which could create an even richer training environment.
I could never see the west doing this in a way that would actually improve the quality of service.
One of the issues we've seen time and again with LLMs is that they can be extremely confident and perfectly incorrect. I have no doubt they're doing their best to train the AI on the best data, but I hope they're also working to solve some of the underlying issues with LLMs.
The correctness issue is the trickiest one, I suspect, although I imagine the quality of the training data plays a huge role there. One of the problems with commercial western LLMs is that garbage scraped off the internet is thrown at them indiscriminately. However, if you trained a model specifically on curated, high-quality medical data, that would be a very different story. The other thing you can do is tag the dataset with metadata references to the actual studies or cases, so that when an answer is produced it can be matched against those sources.
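To make the metadata idea concrete, here's a rough sketch of what I mean. Everything in it (the records, the sources, the keyword-overlap threshold) is made up for illustration; a real system would use proper retrieval, not word matching, but the shape is the same: each curated record carries a reference, and a produced answer gets checked back against those references.

```python
# Illustrative sketch only: curated records tagged with source metadata,
# so a generated answer can be traced back to the studies it drew on.
# All record text and source names here are hypothetical.

records = [
    {"text": "Persistent dry cough lasting over eight weeks may indicate asthma.",
     "source": "Study A, J. Resp. Med. 2019"},
    {"text": "Elevated troponin levels are associated with myocardial injury.",
     "source": "Case series B, Cardiology Reports 2021"},
]

def supporting_sources(answer, records):
    """Return the sources of records sharing enough keywords with the answer."""
    answer_words = set(answer.lower().split())
    matches = []
    for rec in records:
        overlap = answer_words & set(rec["text"].lower().split())
        if len(overlap) >= 3:  # crude keyword-overlap threshold for the sketch
            matches.append(rec["source"])
    return matches

print(supporting_sources(
    "A dry cough lasting over eight weeks can point to asthma.", records))
```

The point isn't the matching logic, it's that every answer comes with a citation trail you can audit, which directly attacks the confident-but-wrong failure mode.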
Ultimately, this isn't fundamentally different from what a human doctor does. They learn to correlate symptoms with common ailments, then use their experience to make a call on what the problem might be. For more serious cases, the patient then undergoes testing to confirm the diagnosis. I can see a similar process working with LLMs: they can come up with the most likely explanation for the symptoms provided, and they might even do a better job, since they work with vastly more data than any human can. This could act as a way to focus further investigation. I'd imagine you'd still want a human in the loop, but you could save a lot of time doing the initial assessment this way.
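The symptom-to-ailment step boils down to ranking candidates by how well they explain what was reported. A toy sketch (the condition/symptom table is invented, and a real model would weight by prevalence and symptom specificity rather than counting overlaps):

```python
# Toy sketch of the "most likely explanation" step: rank candidate
# conditions by how many of the reported symptoms they explain.
# The condition/symptom table below is made up for illustration.

CONDITIONS = {
    "common cold": {"cough", "runny nose", "sore throat"},
    "influenza":   {"fever", "cough", "fatigue", "body aches"},
    "allergies":   {"runny nose", "sneezing", "itchy eyes"},
}

def rank_conditions(symptoms):
    """Return condition names ordered by symptom overlap, best first."""
    scores = {name: len(set(symptoms) & s) for name, s in CONDITIONS.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank_conditions({"fever", "cough", "fatigue"})
print(ranking[0])  # influenza explains the most of these symptoms
```

The output here is just a prioritized list for a human to act on, which is exactly the "focus further investigation" role: the ranking narrows the search, and testing plus clinician judgment confirms or rejects it.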