Director for AI Integration clarifies new classroom policy on gen AI

With a new AI policy coming to DLSU next school year, The LaSallian talks to AI Director Dr. Thomas Tiam-Lee for a closer look at its provisions.

“When our students graduate, they’ll be in an industry that uses AI (artificial intelligence), so they need to be prepared.”

That is how Director for AI Integration Dr. Thomas James Tiam-Lee explained the rationale behind the University’s decision to regulate, rather than ban, AI use beginning next academic year.

In early April, the Office of the Provost released DLSU’s official policy on AI use in education. The guidelines specifically address generative AI (gen AI), such as OpenAI’s ChatGPT and Google Gemini, which are described as tools capable of producing human-like content. With their sudden popularity, gen AI tools have sparked debates on their capabilities and the ethical considerations regarding their use.

The University’s AI policy aims to set comprehensive and balanced guidelines for the fair and responsible use of AI in the academe.

Transparency and flexibility

Policy discussions seemed to be afoot as early as September 2023, when the Provost issued an AnimoSpace announcement encouraging the Lasallian community to use AI responsibly and advising faculty to set their expectations.

Now, a formal policy will take effect next term. Tiam-Lee says it is designed to prompt professors to consider “where AI can be productive in the classes they’re teaching,” where it should be regulated, and when it should be banned.

Faculty members will be required to declare the allowed level of AI use for each graded component in their course syllabi, and individual assignments may carry different levels depending on instructional needs. The policy details three usage levels, “Free to Use,” “Allowed in Specific Contexts,” and “Banned,” meant to allow AI use with transparency and accountability.

Violations, such as using AI for an assignment where it is banned or not disclosing AI-generated work even in a Free to Use setting, will constitute academic dishonesty, a major offense under the 2021-2025 Student Handbook.

Instructors, too, must disclose any AI-generated content in their learning materials. In one of his data science classes, Tiam-Lee used gen AI to create a histogram for a formative assessment. A disclaimer clarified that while the problem was his, the chart was generated by Claude, an AI chatbot.

The policy also prohibits teachers from solely relying on AI detectors to flag submissions because, as Tiam-Lee explains, these tools are still prone to error. Instead, any suspicion of misconduct will need additional evidence, such as a student’s inability to answer questions about their work or do related exercises in person.

Addressing false positives

Most student concerns center on what the regulations actually allow them to do, as well as what recourse they have when falsely accused of AI use.

Lorine David (III, CAM-ADV) recalls an incident in which one of her groupmates was accused of using AI for a theory paper. The professor met with the group privately and gave them “a chance to redeem ourselves and resubmit the work.” 

These concerns were echoed by Alexa Pleyto (I, BSCS-ST). “I do think it’s helpful, but it can also be somewhat unfair to students who didn’t use AI at all but can still get the percentage,” she says, referring to AI detectors.

In response to these sentiments, the Office of the Provost held a forum on May 28 to clarify the policy. A significant number of students raised questions on what constitutes AI assistance and what counts as academic dishonesty. Associate Provost Elenita Garcia, PhD, and Tiam-Lee explained that the determination would depend on the extent and intent of AI use in a particular context.

During the forum, a professor voiced his concern about basing academic dishonesty on a student’s “failure to show accountability or ownership for their work.” He argued that this criterion must be revised, at least in wording, as it fails to account for cases where a student simply did not understand the requirement.

To address student concerns, Tiam-Lee says that various departments have conducted workshops to help their faculty members understand and integrate the policy. He also highlights that the University Student Government Legislative Assembly (LA) worked closely with his office on procedures for handling cases of false accusations and faculty complaints related to AI.

LA Majority Floor Leader Ystiphen Dela Cruz, one of the primary authors of the LA bill calling for the establishment of regulations, recounts to The LaSallian how the assembly collaborated with Tiam-Lee throughout the drafting process. “I anticipated potential concerns, particularly around academic dishonesty, and proactively contributed ideas to guide future policy directions.” The LA notably contributed Section 7.2 to the policy, which details the appeals process for academic dishonesty cases.

Open to AI?

Despite the challenges, Tiam-Lee relays that DLSU faculty members have generally been receptive to AI integration and were among the first to call for an official University-wide policy.

“A person with AI is probably better,” remarks Telibert Laoc, a senior professional lecturer at the Department of Political Science and Development Studies. He began exploring gen AI as soon as it gained popularity, even crafting his own classroom policies.

In one of his LCASEAN classes, Laoc assigned students to prompt AI about different issues in Southeast Asian countries, then verify the tool’s responses through research and discussion.

Students express both excitement and hesitation toward the new policy. David commends DLSU for establishing extensive guidelines on gen AI in the academic setting as early as now, but she admits that at the end of the day, “the implementation of the policy will be what ultimately determines its effectiveness.”

Carrie* (III, AB-CAM), meanwhile, is not a fan of using gen AI, but thinks it is essential to have clear parameters for its use in class work. “Regardless of anyone’s opinion, AI is here to stay, and students will continue to use it for whichever purpose as long as there’s no official university policy on how to use it,” she says.

“Generative AI forces us to confront something about our education… if you think about it, it’s not the output that’s important,” Tiam-Lee told The LaSallian in an interview. “What’s important to our students is that they know how to think. They know the journey that led to the final outcome. But if you look at traditional education systems, it’s always the output that’s the highest priority.”

To Laoc, there is still much to consider in terms of AI usage in the academe, but the most pressing issue is, “Did the student surrender his or her agency to artificial intelligence? That’s really the most critical one. Have you given up your thinking?”

*Names with asterisks (*) are pseudonyms.

With reports from Philip Matthew Molina


This article was published in The LaSallian’s June 2025 issue. To read more, visit bit.ly/TLSJune2025.

By Job Lozada

By Kylie Ortiz
