
In What Circumstances I Avoid the Use of Artificial Intelligence in International Relations Research and Analysis
In the last blog post, titled “How I Use Artificial Intelligence in International Relations Research and Analysis”, I discussed how I integrate AI tools into my work as an International Relations scholar. I outlined practical applications of AI, such as overcoming writer’s block, verifying ideas, finding new sources, data collection and analysis, editing and proofreading papers, and enhancing presentation slides. I also referred to the specific tools and methods that I use, including generative AI tools such as Microsoft Copilot, as well as ResearchRabbit and Microsoft Excel. Finally, I emphasized the importance of using AI to complement, rather than replace, human thinking in International Relations research.
But, considering the plethora of unreliable sources, the geopolitical competition in the field of AI, and the fake news and government-planted distorted narratives that often penetrate the news, the literature and, ultimately, the final AI product that scholars like me will use for our research, there is a practical and ethical need to draw the line. The question is where to draw it, so that AI works as a facilitator of ethical research rather than an instrument of misinformation and dishonest academic submissions. I believe that, as one of the downsides of AI, we will keep asking ourselves this question in the future, as there is rarely black and white in ethics. The same tool that can help a lazy student turn in an essay in a matter of hours can assist a hard-working student with learning difficulties to put their ideas in order. So, this post is about how I keep myself safe from the ethical and practical perils of AI.
Avoid creating fake videos and pictures for misinformation purposes. Now, this is obvious. Fake news and misinformation constitute a major problem for the work of an International Relations scholar. I don’t need to be part of it. Furthermore, using AI for such purposes undermines my credibility. Why would I do that?
Avoid blindly copying and pasting ChatGPT results. ChatGPT can help with a lot of things. It can write summaries, blog posts and keyword lists, as well as synthesise explanatory texts from a variety of sources. It can also help with retrieving information that I find hard to locate myself. But blindly copy-pasting results returned by the AI tool of my choice isn’t a good idea, for a number of reasons:
- Inappropriate sources or no sources at all: One can’t just trust whatever the AI tool presents them with. It is important that sources are provided and that they are reliable. By fact-checking the answers I get, not only do I ensure that what I claim is based on solid evidence, but I also get further inspiration for my projects. In the images below, you can see that I asked Microsoft Copilot whether military aid is an opportunity for the donor state. At first, I simply posed the question, and, as you can see, the tool didn’t provide me with sources that support the answer it gave me. The second time around, I asked the same question and requested that the answer be backed up with academic and government sources. As you can see, Copilot returned an answer based on such sources, which I can rely on for my research and analysis:


- Lack of context: AI tools can provide me with answers, but that doesn’t mean they are the right answers for the specific project I have to complete. For example, with regard to a publication, the text provided by the AI tool may not correspond to the particular style that a specific publisher wants. In terms of academic assignments, for which I have to take into consideration what has been said in class and what’s on my reading list, an AI tool just can’t handle all those parameters. As a result, there is a great risk of my text being out of context. And that’s how publishers often know that authors have taken the AI shortcut…
- It undermines learning and the development of critical thinking: To become an International Relations scholar, I have invested quite a lot of money and effort in obtaining knowledge and skills. I have grown them the same way athletes grow their muscles. And just like muscles that are no longer worked, my knowledge and skills will decline drastically if I begin relying solely on AI. Isn’t that a shame?
Avoid submitting AI-generated academic assignments. For many, this is obvious. But for many others, it isn’t. Whether the professor will accept the assignment is another issue; the point is that it shows when an essay is AI-generated. From the title and the structure of the text down to the little details, the experienced eye can easily spot the signs. If I’m stuck, I prefer to submit prompts to the AI tool for ideas and to consult directly with my professor rather than undermine my academic integrity by submitting something that has been entirely generated by a machine.
To conclude, AI has already begun to revolutionize our work in International Relations, a development occurring in all scientific fields. However, due to the multitude of grey zones in IR, it is imperative to use this technology with caution and integrity. By taking into consideration the possibilities, limits and ethical concerns of artificial intelligence, we can use it for our day-to-day tasks in the field without compromising our principles and without breaking any rules. It is our responsibility as scholars and professionals to ensure that artificial intelligence is used to facilitate ethical research instead of undermining academic integrity and fuelling misinformation and propaganda. Certainly, the possibilities of AI will keep growing with time; but instead of worrying that it will change things for the worse, each of us must remain vigilant in upholding the standards of our scientific field and promoting a culture of critical thinking and responsible progress.
Relevant Literature
- Meleouni, Christina & Efthymiou, Iris Panagiota, “Artificial Intelligence and Its Impact in International Relations”, Journal of Politics and Ethics in New Technologies and AI, Vol. 2, No. 1 (November 2023), pp. 1-12, DOI: https://doi.org/10.12681/jpentai.35803 (Open Access)
- Kissinger, Henry A. & Schmidt, Eric & Mundie, Craig, “Genesis: Artificial Intelligence, Hope, and the Human Spirit”, Little, Brown and Company, 2024. Available at: https://amzn.to/4gnMNM1
- Mitchell, Melanie, “Artificial Intelligence: A Guide for Thinking Humans”, Picador, 2020. Available at: https://amzn.to/41igcTF
- Mollick, Ethan, “Co-Intelligence: Living and Working with AI”, Portfolio, 2024. Available at: https://amzn.to/4fYJQBZ
- Togelius, Julian, “Artificial General Intelligence”, The MIT Press, 2024. Available at: https://amzn.to/4il6JRx
- Kapur, Rajeev, “AI Made Simple: A Beginner’s Guide to Generative Intelligence”, Rinity, 2024. Available at: https://amzn.to/3Bktp3D
- Miller, Michael R., “Using Artificial Intelligence: Absolute Beginner’s Guide”, Que Publishing, 2024. Available at: https://amzn.to/49pGtkN
- Roumate, Fatima (ed.), “Artificial Intelligence in Higher Education and Scientific Research: Future Development”, Springer, 2023. Available at: https://amzn.to/3F6v0M6

