Description
This guide addresses a practical problem that has emerged faster than most
students and staff can adapt: generative AI systems are widely available, yet
academic standards still require that assessment and research reflect a student’s
own understanding, judgment, and scholarly responsibility.
The value of this book lies in its emphasis on process rather than shortcuts. It
treats AI as a tool that can support learning and research when students verify
claims, remain transparent about assistance received, and keep interpretive and
ethical decisions firmly under human control. These principles matter for
undergraduate coursework and are even more consequential for postgraduate
work, where research credibility depends on traceable reasoning, careful
sourcing, and responsible handling of evidence.
I recommend this guide to students who want to use AI without undermining
their learning and to supervisors and lecturers who need a common language for
setting expectations. The habits it promotes (verification, documentation, and
accountable authorship) are not merely compliance measures; they are core
scholarly skills that remain valuable regardless of how AI tools evolve.
