It looks like one of the hottest topics—if not the hottest topic—for educational conferences in 2023 is going to be how to handle proposals generated by artificial intelligence (AI) tools, like ChatGPT.
While it is tempting for conference organizers to implement outright bans on AI-generated papers, such bans would be hard to enforce, especially as AI-generated text becomes increasingly difficult to detect. They might also do more harm than good in some cases, such as for authors who need to submit papers written in a language they are not fluent in.
James Vincent over at The Verge has written an excellent piece on the complicated issues conference organizers face when considering bans on AI-generated content. (Interestingly, the article focuses on the decision by one of the most prestigious machine-learning conferences to ban all papers generated by machine-learning systems.)
Is your conference ready for AI-generated papers? Share your thoughts below!
UPDATE: So it looks like an industrious computer-science student has already developed software that can detect whether text was generated by ChatGPT. In addition, OpenAI, the company that developed ChatGPT, has said it is working not only on a feature that adds a watermark to ChatGPT's output but also on its own tool for detecting ChatGPT-generated text.