Title: Developing Pretrained Language Models for Turkish Biomedical Domain
Authors: Turkmen, Hazal; Dikenelli, Oguz; Eraslan, Cenk; Calli, Mehmet Cem; Ozbek, Suha Sureyya
Date issued: 2022
Date accessioned/available: 2023-01-12
Conference: 10th IEEE International Conference on Healthcare Informatics (IEEE ICHI), June 11-14, 2022, Rochester, MN
Type: Conference Object
Pages: 597-598
ISBN: 978-1-6654-6845-9
ISSN: 2575-2634; 2575-2626
DOI: https://doi.org/10.1109/ICHI54592.2022.00117
Handle: https://hdl.handle.net/11454/77466
Language: English
Access: closed access (info:eu-repo/semantics/closedAccess)
Keywords: biomedicine; pretrained language model; transformer; transfer learning; radiology reports
WOS: WOS:000864170400105
Scopus: 2-s2.0-85139013064

Abstract: Pretrained language models further trained on in-domain corpora show impressive results on biomedical and clinical NLP tasks in English; however, there is minimal work on low-resource languages. This work introduces the BioBERTurk family, three pretrained Turkish models for the biomedical domain. To evaluate the models, we also introduce a labeled dataset for classifying radiology reports of CT exams. Our first model is initialized from BERTurk and further pretrained on a biomedical corpus. The second model again continues pretraining the general BERT model, this time on a corpus of Ph.D. theses in radiology, to test the effect of task-related text. The final model combines the radiology and biomedical corpora with the BERTurk corpus and pretrains a BERT model from scratch. The F-scores of our models on radiology report classification are 92.99, 92.75, and 89.49, respectively. To the best of our knowledge, this is the first work to evaluate the effect of a small in-domain corpus when pretraining from scratch.
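
The abstract describes domain-adaptive pretraining: initializing from the general-domain BERTurk checkpoint and continuing masked-language-model training on in-domain biomedical and radiology text before evaluating on radiology report classification. The sketch below is a minimal illustration of such continued pretraining with the Hugging Face Transformers library, not the authors' released code; the checkpoint name (dbmdz/bert-base-turkish-cased), the corpus file path, and all hyperparameters are illustrative assumptions rather than values reported in the paper.

    # Minimal sketch of continued (domain-adaptive) MLM pretraining of a Turkish BERT.
    # Assumptions: BERTurk checkpoint id, corpus path, and hyperparameters are illustrative.
    from transformers import (
        AutoTokenizer,
        AutoModelForMaskedLM,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )
    from datasets import load_dataset

    # General-domain Turkish BERT checkpoint (assumed id for BERTurk).
    checkpoint = "dbmdz/bert-base-turkish-cased"
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForMaskedLM.from_pretrained(checkpoint)

    # Plain-text in-domain corpus, one document per line (hypothetical path).
    corpus = load_dataset("text", data_files={"train": "biomedical_corpus_tr.txt"})

    def tokenize(batch):
        # Truncate long documents to the model's 512-token limit.
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

    # Standard 15% token masking for masked-language-model training.
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

    args = TrainingArguments(
        output_dir="bioberturk-continued",
        per_device_train_batch_size=16,
        num_train_epochs=3,
        learning_rate=5e-5,
    )

    trainer = Trainer(
        model=model,
        args=args,
        train_dataset=tokenized["train"],
        data_collator=collator,
    )
    trainer.train()

The resulting checkpoint could then be fine-tuned on the labeled CT radiology report dataset with a standard sequence-classification head; the paper itself does not specify the tooling used.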