The Office of the Provost spearheaded an online forum last May 28 to discuss the University’s new policy on the use of generative artificial intelligence (gen AI), which is set to take effect in Term 1, Academic Year 2025-2026.
Convening the Lasallian community, the forum broke down the provisions of the policy to foster a more “comprehensive” understanding of the guidelines among students and professors, as part of DLSU’s effort to embrace technological developments and adopt AI tools for academic use.

What is covered by the policy
The forum was primarily led by Dr. Thomas James Tiam-Lee, director for AI integration under the Office of the Provost. He began by clarifying the distinction between gen AI and AI. While the latter broadly encompasses systems or programs that exhibit some form of intelligence, such as grammar checkers, spreadsheet formulas, and physics simulators, gen AI refers to programs that can produce human-like content such as text, images, and videos.
“The general rule is that if the tool outputs free-form, human-like content and the response varies every time you run it, it is likely to be generative AI,” he explained.
Tiam-Lee then emphasized that the provisions of the policy strictly apply to gen AI and not traditional AI tools in general, which have already been widely accepted and normalized within the academic setting. He cautioned, though, that the line between the two is becoming increasingly blurred, as more and more applications are beginning to incorporate gen AI features.
Under the new policy, students will now be required to include written disclosure statements when gen AI is used in the production of any material. These statements must indicate which tool was used, how it contributed to the work, and to what extent.
In turn, instructors are required to specify the acceptable level of gen AI use in their class—whether it is free to use, allowed in certain contexts, or completely banned—either through the course syllabus or through separate instructions for specific course requirements. This gives instructors the discretion to contextualize the use of gen AI depending on the learning objectives of the course.
Regarding the use of AI detector tools to identify academic dishonesty among students, Tiam-Lee stated that faculty members “should not use it as a sole basis” because there is a “need to look for additional pieces of evidence.”
Traversing the future with gen AI
While the University opens its doors to AI, Tiam-Lee acknowledged the technology’s environmental costs, noting that “embracing generative AI in the University does not necessarily mean that we are being complicit with all of these negative issues.”
He encouraged the participants to take advantage of what they learn in their education to steer the course of AI in the right direction. “We need to place importance on educating ourselves about those issues so that we can promote active participation and citizenship in future conversations,” he added.
He also pointed out that cheating on schoolwork has long been an issue, and that the public availability of gen AI, especially in academic settings, has simply given students another means of committing it.
“Gen AI is just amplifying the problem because it makes it easier for them to do that, right? But now, we are forced to confront…some problems that have already plagued our education system,” Tiam-Lee conveyed.
As AI continues to be integrated into society, he urged everyone to “make sure that all uses of gen AI are aligned with the learning objectives” of their academic work and to avoid taking a “dismissive stance” on its use, stressing that now is the time to reckon with AI’s development and “eventually adapt to it.”
Addressing cracks in the policy
During the event’s open forum, participants raised several concerns and challenges that may hinder the effectiveness of the gen AI policy.
One of the most prominent issues was the policy’s decentralized nature, which some attendees feared would lead to inconsistent implementation across departments and professors. Tiam-Lee responded, “It’s very difficult to provide a universal list because it will vary depending on the context of the course.”
Faculty members also questioned the reliance on self-disclosure. One of them asked anonymously, “How do we know if the students used AI? How can we check if there was no disclosure made?” Tiam-Lee admitted that “there’s no easy way to do it.”
Taking into consideration the feedback from the event’s participants, Provost Dr. Robert Roleda concluded the forum, echoing Tiam-Lee’s openness to gen AI in the academe: “There were many interesting questions, and I think we need all of those to really push the boundary of our understanding of gen AI and how we can use this as part of our educational process. I don’t think we can bury our heads in the sand and not use AI at all. It’s there.”
