
How to Use Large Language Models Ethically in Math Modeling Contests

COMAP’s new policy explains why and how teams in mathematical modeling competitions should use large language models and generative AI responsibly, detailing guiding principles, risks, citation requirements, and ethical considerations to ensure fairness, transparency, and academic integrity.

Model Perspective
How should large language models (LLMs) be responsibly used in mathematical modeling contests, and how should their outputs be cited? These questions were not addressed in official rules until COMAP recently issued clear guidance for the upcoming HiMCM competition.

In today’s increasingly digital world, LLMs and generative AI tools are becoming practical instruments for research and competitions. They excel at organizing ideas, generating drafts, rewriting, and polishing language, but their convenience also brings a series of problems and challenges. COMAP’s latest contest policy provides explicit guidance and regulations for using these high‑tech tools.

Policy Background and Purpose

Because LLM and generative AI technologies continue to evolve and proliferate, COMAP has adopted a more transparent and directive approach to managing their use in mathematical modeling contests. The policy aims to ensure that every aspect of student work, from model research and development (including code creation) to written report preparation, occurs in a fair and transparent environment. COMAP commits to updating and refining the policy as the technology advances.

Guiding Principles for Using AI Tools

While solving problems does not require AI tools, COMAP acknowledges their value as productivity aids. They can help teams prepare submission materials, generate initial structural ideas, or perform summarizing, rewriting, and language polishing. However, certain modeling tasks rely heavily on human creativity and teamwork; over‑reliance on AI may pose risks. COMAP therefore advises caution when using AI for model selection and construction, assisted code creation, interpretation of model data and results, and drawing scientific conclusions.

Risks and Limitations of Using LLMs

Objectivity concerns: LLM‑generated content may contain racial, gender, or other biases, and some important viewpoints might be under‑represented.

Accuracy challenges: LLMs can produce fluent but scientifically illogical statements, and may fabricate references or make errors on complex or ambiguous topics.

Contextual understanding limits: LLMs cannot apply human contextual insight, leading to potential misunderstandings.

Dependence on training data: Performance relies on large, high‑quality datasets, which may be scarce for certain domains or languages.

Team Conduct Guidelines

COMAP stresses that teams must be open and honest about their use of AI tools; greater transparency builds trust and encourages proper usage. Without clear citation and a description of each tool's role, AI-generated content may be deemed plagiarism and result in disqualification.

Teams must explicitly state which LLMs or AI tools were used, specify the model and its purpose, verify the accuracy and appropriateness of generated content, correct any errors or inconsistencies, provide proper citations, and remain vigilant against potential plagiarism.

Citation Format Example

OpenAI ChatGPT (Nov 5, 2023 version)
Query 1: <insert the exact wording you input into the AI tool>
Output 1: <insert the complete output from the AI tool>
Query 2: <insert the exact wording of any subsequent input into the AI tool>
Output 2: <insert the complete output from the second query>

GitHub Copilot (Feb 3, 2024 version)
Query: <insert the exact wording you input into the AI tool>
Output: <insert the complete output from the AI tool>

Google Bard (Feb 2, 2024 version)
Query: <insert the exact wording of your query>
Output: <insert the complete output from the AI tool>

Reflections on Using New Tools

The emergence of LLMs undeniably opens a new era of knowledge acquisition. Much like the advent of search engines, LLMs provide a novel way to obtain and generate information. When search engines first appeared, people debated whether they were treasure troves of knowledge or impediments to memory and learning; today the debate resurfaces with LLMs as the protagonists.

Just as a sharp knife can prepare food or cause harm, LLMs are a double‑edged sword. They can boost efficiency, creativity, and productivity, yet they also risk plagiarism, misinformation, and misunderstanding. Balancing benefits and drawbacks to achieve a positive impact in education and competition requires deep consideration.

In an age of information overload, maintaining clear, critical thinking is essential. Even when using powerful tools like LLMs, we should not blindly accept everything they generate. We must learn to leverage these tools while preserving independent thought, creativity, and critical analysis.

Widespread LLM adoption also raises ethical questions: Are we over‑relying on these tools? Do they deprive us of learning and exploration opportunities? Have we lost genuine communication and collaboration?

COMAP’s policy on LLM usage is a timely and necessary step. It offers clear direction for student teams, mentors, and judges, encouraging participants to harness advanced tools without neglecting human creativity and teamwork.

All competing teams are strongly urged to follow COMAP's LLM policy strictly, to avoid severe consequences such as invalidated results or disqualification due to improper citation.

Reference: COMAP. (2023). Use of Large Language Models and Generative AI Tools in COMAP Contests. COMAP Contest. https://www.contest.comap.com/undergraduate/contests/mcm/flyer/Contest_AI_Policy.pdf

Written by Model Perspective

Insights, knowledge, and enjoyment from a mathematical modeling researcher and educator. Hosted by Haihua Wang, a modeling instructor and author of "Clever Use of Chat for Mathematical Modeling", "Modeling: The Mathematics of Thinking", "Mathematical Modeling Practice: A Hands‑On Guide to Competitions", and co‑author of "Mathematical Modeling: Teaching Design and Cases".
