Letter
This too shall pass: the performance of ChatGPT-3.5, ChatGPT-4 and New Bing in an Australian medical licensing examination
First published: 01 August 2023
No abstract is available for this article.
Supporting Information
Filename: mja252061-sup-0001-supinfo.pdf (PDF document, 219 KB)
Description: Supplementary table