
As generative artificial intelligence (AI) tools become mainstream, efforts to detect AI-driven plagiarism and academic dishonesty have become indispensable. For instance, some courses require students to submit their work through AI checkers such as Turnitin. Others require additional evidence, like revision histories, chat logs, or live videos of students doing the work. Ironically, this heightened vigilance toward AI use can do more harm than good, raising concerns about the fairness of student assessment and the health of learning environments.
Oblivious to the potential inaccuracies and misleading outputs of generative AI and AI detectors, educators and students often treat these tools’ results as definitive. In response, DLSU Director of AI Integration Dr. Thomas Tiam-Lee issued a reminder in May 2024 on the limitations of AI detector tools. Yet much remains for the incumbent director to clarify regarding the future of AI use at the University.
Approaching the underlying issue
DLSU generally has an open outlook on AI, encouraging its use as an opportunity for innovation in pedagogy and learning. However, the University is still in the early stages of maximizing the technology for education, leaving a degree of AI illiteracy prevalent in the academic community. As a result, issues like plagiarism accusations, academic dishonesty, and unbalanced pedagogical practices persist.
Generative AI detectors typically analyze a text’s complexity and sentence variation, as AI-generated writing tends to be simpler and more conventional due to its predictive nature. Since such writing can also resemble personal writing styles, tools such as Turnitin and GPTZero should be used cautiously. Misunderstanding how these checkers work risks compromising effective learning environments with dismissive behavior and malpractice toward AI.
In this regard, Dr. Tiam-Lee made it one of his initial priorities to remind the Lasallian community of the guiding principles of AI use and the tools’ tendency to generate misleading results, such as “false positives and false negatives.” The former refers to human work tagged as AI-generated, while the latter to AI-generated work tagged as human work.
Recognizing this, Dr. Tiam-Lee held a university-wide forum last May 28 on DLSU’s new policy on the use of generative AI. The initiative was built on surveys from the academic community, faculty discussions during AI integration workshops, and consultations with the Student Discipline Formation Office and the University Student Government Legislative Assembly. Set to take effect in AY 2025-2026, the policy aims to provide sound guidelines on the use of generative AI in the academic system, with usage policies to be included in all course syllabi.
Integration takes collaboration
Dr. Tiam-Lee’s directorship is a step forward for Lasallian educational innovation, as his office assists in integrating AI into teaching methods and curriculum design across the University. “My vision is to ensure that the University has an aligned perspective on what AI is and how we view this kind of technology in terms of how it can evolve and revolutionize the way we teach our students,” he expressed.
Moreover, he stresses the need for collaboration between learners and educators on their AI use, citing transparency and the ability to discern the nuances of AI: “The important thing always is [that] we are transparent on what part of it is AI assisted…we need to make sure that for both the student side and the teacher side…[and] we do not lose the (human) agency.”
Still, it will surely take time for the University to adjust accordingly. Traditional classroom dynamics and practices we have grown accustomed to will be subjected to immense change and, in turn, scrutiny. Moreover, as AI continues to rapidly evolve, devising foolproof methods to perfectly distinguish between human and AI-generated work, especially when no disclosures are made, will become virtually impossible.
The responsibility remains on the individual to act critically, with the question “How do we determine human ownership?” in mind. To that end, Dr. Tiam-Lee, in the recent forum, recommends asking questions and assessing how one presents and argues for their work when gauging ownership.
The ramifications of educational AI use
Considering technology’s rapid development, AI’s impact will undeniably transform the educational landscape, with tools like ChatGPT and Gemini becoming second-nature supplements for self-instruction. Nevertheless, AI’s successful adoption relies on institutional policies and formal guidance that embrace it as a stepping stone to bolster learning outcomes and methods.
Traditional output-based assessment can only go so far with the integration of AI, as the technology is better suited to process-based assessment. Dr. Tiam-Lee conveys the need for a paradigm shift to properly take advantage of the technique: “I’m also encouraging teachers that we need to somehow shift the focus of our assessments rather than putting so much emphasis on the final outcome. We need to focus more…on how our students are able to come up with the final output, because that’s the evidence of [their] skill.”
Integrity, clear purpose, and the promotion of AI literacy must be at the forefront of AI integration in student-centered education. These technologies do not exist to diminish learning outcomes or replace the role of the educator, but to augment and further their endeavors. Indeed, AI’s potential is both terrifying and exciting, a tension that demands the continued evaluation and balancing of its risks and benefits. Therefore, it is up to us to shoulder the burden of ascertaining what is “artificial.”
This article was published in The LaSallian’s June 2025 issue. To read more, visit bit.ly/TLSJune2025.
