As a triage and screening tool, AI could, in theory, reduce pressure on the medical system and direct resources to the patients who need care most.1 AI could take over tasks that are less complex but time and labour intensive, freeing health workers to tackle more complex work. The challenge, however, is how to increase the trust of health workers and patients in AI. On the one hand, this concerns the accuracy of AI's data analysis; on the other, it raises ethical questions. Improving accuracy requires a larger and more comprehensive database, a problem for technicians to solve. Ethically, though, who is responsible for errors made by AI? And does replacing so much of the labour of clinical work with tools really serve the harmonious development of the doctor–patient relationship?
- © British Journal of General Practice 2019