A student sits down at her computer and drops a Common App essay prompt into ChatGPT. Four seconds later, she has 650 words. The grammar is clean. The essay has a logical structure. There are plausible examples. But when she reads it, she feels nothing.
“Is this really mine?” she asks herself.
She gives the AI a few more details about her life and asks it to try again. Four seconds later, 650 more words. Now the essay seems to be about her. It's describing real things from her life. It looks like a college essay. But the feeling is still wrong.
“This… this doesn't sound like me,” she says.
The stakes are high. She knows getting into college can make a real difference in her life. And the pressure is growing. The deadlines are not far away, and all of her friends have already submitted their applications. They have been meeting with expensive admissions counselors and essay-writing tutors. They have had lots of help.
But she has to do it alone. Her parents have been too busy to help. They can't afford extra tutors or admissions specialists. At school, her guidance counselor has so many students on the wait list that it takes weeks to get an appointment. When she finally does meet the counselor, she feels anonymous, like just another name on a list.
Maybe getting a little help from AI is not exactly cheating, she tells herself. Who will know, anyway?
The story is not far-fetched. I know because I heard similar ones when I asked my college students whether they had tried using AI to write their papers, and why they had tried it. They faced different kinds of pressure. They worried about grades, academic achievement, and heavy course loads. But the essentials were the same.
Writing is difficult. Their situations were complicated. AI seemed to offer an easy solution.
The conversation about AI and student writing is usually framed around cheating and detection. Did the student use AI? Can we catch them? Should they be punished?
Those questions matter, but they miss something fundamental about why we assign student writing. Essays like the college application essay or end-of-semester term papers are thinking exercises. Professors want to see how students work through course materials and arrive at their own conclusions. Admissions officers want to see how an applicant reflects, what they notice, what matters to them. Using AI to write the essay is easier because it skips all the hard work of thinking. The product appears with no process.
When you understand that, it's easy to be against AI. But totally rejecting the technology also ignores some fundamental inequalities: not everyone has the time or support necessary to do that hard work of thinking through an essay. Some students have more complicated situations than others, with fewer resources to help them. In those circumstances, we might ask, is there a way to use AI responsibly?
This guide is for families trying to figure out where the line is. It covers what admissions officers actually look for, what the detection tools can and cannot do, and three rules that keep AI use ethical and productive.
The AI Landscape in College Admissions (2026)
Hate it as much as you want: nobody, least of all anyone involved in higher education, can ignore AI. Half of college admissions officers say AI strips essays of their personal touch. In place of a real window into how students think, essays produced by AI are generic, boring, without any authentic voice. Yet on the other side, roughly one in three college applicants used AI tools during the 2023-24 cycle, according to Foundry10. We can assume that number is significantly higher now.
About half of that usage fell into categories a number of schools consider acceptable, like brainstorming and grammar polishing. The other half went further, into structuring, drafting, and editing. The Common App has responded by updating its fraud definition to include “intentionally misrepresenting as one's own original work the substantive content or output of an artificial intelligence platform, technology, or algorithm.”
That sounds straightforward, but the overall picture is not so clear. Only about 30% of universities have explicit AI policies. A few of them are strict. Caltech now requires Fall 2026 applicants to read its “Ethical Use of AI” guidelines and prohibits any generation of essay text via AI, stating it may deny or rescind admission for violations. Brown University prohibits applicants from using AI to generate “any substantive part of their written materials,” allowing only spelling or grammar help. Many others publish guidelines that permit AI for brainstorming and, in some cases, editing. But the overall story is one of disjointed reactions to the new technology, universities struggling to find a clear position, and students left to navigate the new terrain without a clear map.
On the enforcement side, the most common tool is Turnitin's AI detection feature, which is already integrated into a number of institutional plagiarism-checking workflows. But a quick scan of Turnitin-related threads on Reddit will reveal the confusion and consternation that the tool causes among students, who often complain of being accused unjustly.
Alongside the complaints, you will find recommendations for a growing industry of “AI humanizer” tools that promise to make AI-generated text undetectable. Students can paste AI-written text into these tools, which will rephrase sentences, vary structure, and add “human-sounding” imperfections.
The result is a system that often punishes the wrong people. Honest students who write their own essays risk being flagged by imperfect detection software. And they know it. Another pressure to add to the many others in their lives: surveillance and false accusation. Meanwhile, students who use AI and then run the output through a humanizer often pass undetected. All the fancy new AI tools, surprise surprise, only exacerbate the problem.
The Detection Illusion
AI detection is not reliable and families should understand why.
Turnitin claims a false positive rate of less than 1% of documents, but there are caveats. Down at the sentence level, that false positive rate goes up to roughly 4%. That means about 1 of every 25 human-written sentences gets incorrectly flagged. That is why the software so often returns a nonzero “percent AI” score for essays that contain no AI-generated text at all.
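A per-sentence rate of 4% compounds quickly across a whole essay. Here is a back-of-the-envelope sketch, assuming for simplicity that each sentence is flagged independently (a simplification; real detectors don't work sentence by sentence in isolation):

```python
# Toy model, not Turnitin's actual algorithm: if each human-written sentence
# has an independent 4% chance of a false flag, what is the chance that an
# essay of n sentences gets at least one false flag?
def p_any_false_flag(n_sentences: int, per_sentence_fpr: float = 0.04) -> float:
    # P(at least one flag) = 1 - P(no sentence is flagged)
    return 1 - (1 - per_sentence_fpr) ** n_sentences

# A 650-word essay runs roughly 35-45 sentences.
print(f"{p_any_false_flag(40):.0%}")  # about 80% under the independence assumption
```

Under even this crude simplification, a fully human-written essay of typical length has a high chance of collecting some flagged sentences, which is why “percent AI” scores on clean essays are so common.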
Those are Turnitin's own numbers. Independent research tells a different story. A 2023 Washington Post investigation by Geoffrey Fowler found Turnitin misclassified more than half of a sample of human-written essays as AI-generated. For specific populations, the picture is worse. Non-native English speakers and students who write in a formal register get flagged at significantly higher rates.
Why? Because AI detection works by pattern recognition. The detection tools look for statistical patterns in word choice and sentence length, and for predictable structures. Students who are trained to write in formal structures, who use sophisticated vocabulary, or who favor an academic style produce texts that look, statistically, more like AI output.
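To make that concrete, here is a deliberately crude sketch of the kind of statistic such tools lean on. The metric and both sample texts are my own illustration, not any vendor's method: human prose tends to vary sentence length (sometimes called “burstiness”), while uniform, formal prose scores low on that variation and so looks more “machine-like” to a statistical detector.

```python
import statistics

def sentence_length_variation(text: str) -> float:
    """Coefficient of variation of sentence lengths: a crude 'burstiness' proxy."""
    normalized = text.replace("!", ".").replace("?", ".")
    lengths = [len(s.split()) for s in normalized.split(".") if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

# Uniform, formal sentences (hypothetical sample): low variation.
formal = ("The program offers many valuable benefits. "
          "Students gain important skills every semester. "
          "Teachers provide helpful feedback along the way.")

# Varied, personal sentences (hypothetical sample): high variation.
personal = ("I froze. My grandmother's kitchen, with its cracked blue tiles "
            "and the smell of cardamom, had taught me more about patience "
            "than any classroom ever could. Why?")

print(sentence_length_variation(formal) < sentence_length_variation(personal))  # True
```

The formal sample, with its evenly sized sentences, scores far lower on this toy metric than the personal one. A student trained to write in that even, academic register gets penalized by exactly this kind of statistic.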
Imagine the student I described at the beginning of this article. Let's say she does the right thing and deletes the ChatGPT essay. She trusts herself to do a good job, trusts all the work that she has done in her English classes to become a better writer. She's under pressure, but that's just a chance to prove herself. “When the going gets tough,” she tells herself.
She works on her essay for weeks, right up to the deadline. It's good. It came out better than she expected. She put everything into it. And best of all, she knows that it's really hers.
She shows it to her guidance counselor, the one who hardly had time to meet with her, and the counselor runs it through an AI checker. False positive. Now she has to prove she wrote something that she wrote. There are no drafts she can show. Nobody helped her who can speak up on her behalf. All she can do is swear that it's her own work.
There's the core problem right there. Even if detection accuracy improves, asking “Did AI write this?” is the wrong question. The question should be “Can you show your work?” Documentation, not detection.
The False Positive Problem (and Why It Hits Some Students Harder)
The false positive problem deserves more discussion because the impact is so unfair.
A Stanford study found that AI detection tools flag writing by non-native English speakers as AI-generated at a rate of 61.3%. Learning another language, as those of us who have done it can tell you, is really really hard. It's not just all the new vocabulary you have to memorize. You have to learn to think and express yourself in new grammatical structures. You memorize them, internalize them, and, if you work hard and stick with it for long enough, eventually you reproduce them naturally.
Learning to write in a second language? You can't get away with having a good accent and getting your point across. You have to use those grammatical structures correctly, on paper. Of course, you are more likely to rely on formal patterns. You are more likely to use classic transitions, received phrases, and structured arguments. Unfortunately, these are the hallmarks of what Turnitin thinks a machine sounds like.
Some institutions have turned against detection software. Vanderbilt University disabled Turnitin's AI detector entirely, saying the tool gives “no detailed information as to how it determines if writing is AI-generated.” Other schools have turned AI detection into a manual review step, where a flag triggers follow-up human investigation.
GradPilot, a college admissions research platform, has started tracking what they call “flagxiety,” the anxiety that some students feel about being falsely accused of AI use. Horrible as the neologism may be, the feeling is real. It's changing how students write. Everyone knows to avoid overusing em-dashes. But some students are deliberately introducing errors into their prose or avoiding vocabulary that seems too advanced.
It's a bit of a pedagogical disaster. Fear of getting falsely accused is leading students to write worse on purpose, and the emphasis on detection is undermining exactly what the tools are supposed to protect.
What Admissions Officers Actually Read For
Talking to admissions officers helps.
Despite student fears, they are not actually feeding application essays into some scary robot mouth, one after another, on a conveyor belt, until the robot's horrible red eyes light up and a “Detected!” flag shoots out of the top of its head.
What they are looking for actually makes a lot of sense. Admissions officers read thousands of essays every cycle. When you read a lot of student writing, trust me, you get a feel for it. They can spot a generic essay in the first paragraph, probably in the first sentence. The essays that stand out, on the other hand, share specific qualities.
Those essays have a real voice. By that I mean that you can hear the particular person when reading them, almost as if they were speaking. Those essays also have specific details: things that the student actually experienced or noticed, which make the essay feel real. (Literary critics can talk about the relationship between details and Realism ad infinitum, but this is not the place.) And those essays show reflection, by which I mean, the student reveals they have thought about and articulated what an experience meant to them.
These criteria are beyond the reach of AI. Essays written with AI lack specificity, because AI doesn't have specific experiences. They lack voice, because AI writes in a statistical average of all the writing it has processed. And they lack genuine reflection because AI cannot reflect, cannot have experiences to reflect on. An experienced admissions reader can identify an AI-generated essay without needing unreliable detection software or a scary, essay-devouring robot monster with fiery red eyes.
I spent ten years teaching college writing. I have read thousands of student essays. The ones written by real students have a quality that is hard to define and immediately recognizable. Sometimes they are rough, sometimes they are finely polished, but there is always a sense of someone thinking, sometimes with difficulty, through a question, a problem, or a story. Often there are moments of discovery, when the student finds something they didn't expect. AI text does not have those moments. It is consistently competent and consistently flat.
AI That Writes vs. AI That Coaches
The way people talk about AI and student writing tends to collapse an important distinction. There is a difference between AI that writes and AI that coaches. Often we talk about using AI for brainstorming versus for drafting, but I think the word “coaching” is more useful.
AI that writes is when you enter a prompt and the AI generates the essay text. You give it a topic, maybe some pointers, and you get back sentences, paragraphs, a complete essay. Here, AI is doing the thinking. All the student does is copy-and-paste and submit (or edit a little and submit; or run it through a humanizer and submit).
AI that coaches does something different. It asks a question to get the writing moving. It notices when an argument has a gap. It points out that the most interesting sentence in the essay draft might be the one buried in paragraph four. Here, the student does all of the thinking and all of the writing.
The distinction is not just a matter of policy or honor codes. It is architectural, built into the structure of the process. A coaching tool that is built correctly can't generate essay text even if the student asks for it. A coaching tool can respond to what the student has written, ask questions about it, suggest areas for development, and provide feedback. But it can't produce the words that end up on the page. It's a little like the difference between a calculator that solves the math problem for you and a tutor who sits beside you, asking, “What do you think the next step is?”
(For a closer look at how this architectural distinction works in practice, see our page on how ethical AI coaching works.)
Three Rules for Using AI Ethically
After spending years thinking about this as both a writing professor and someone building AI coaching tools, I've come up with three rules to help draw a clear line between ethical AI use and academic dishonesty. They are simple and they apply to every situation.
Rule 1: The student can explain every sentence. If there is a sentence in the essay that the student did not write and cannot explain, that sentence should not be there. This applies to AI-generated text, to sentences written by a parent who was “just helping,” and to edits made by a private college consultant. The standard is not where the words came from. The standard is whether the student owns the thinking behind them.
Rule 2: The process is documentable. Could the student walk someone through how the essay was written? Not in vague terms (“I brainstormed and then I wrote it”) but in specific ones. Here is the freewrite where the idea first appeared. Here is the outline that shaped the structure. Here is the first draft and here is what changed in revision and why. A process that can be documented is a process that was real.
Rule 3: The AI made the student think harder, not work less. This is the test that separates coaching from ghostwriting. After using an AI tool, did the student understand the topic better? Did they see something in their own writing they hadn't noticed? Did the AI push them to dig deeper, be more specific, reconsider an assumption? If the AI made the work easier without making the thinking deeper, it was not coaching.
These three rules should hold up regardless of what tool is used, what school the student is applying to, or how AI detection technology might evolve. They are probably applicable to any kind of essay help, even human assistance. That's because they are grounded in what the essay is supposed to be and to do. It is a demonstration of how the student thinks. When the student does the thinking, the essay is authentic.
Why Documentation Beats Detection
AI detection tries to answer the question “Did AI write this?”
As I described above, it's the wrong approach. All the detection tools produce false positives. They disproportionately flag certain populations. And they are in a sort of tech arms race with the so-called “humanizer” tools that are designed to evade them.
Process documentation takes a more sensible approach. Instead of policing the finished product, it records the journey that produced it. Writing is a process, and you can trace it. Every brainstorming session, every freewrite, every outline, every draft, every revision, every AI coaching interaction. The entire journey, timestamped and verifiable.
When a student has a complete record of their writing process, the question “Did you use AI?” becomes irrelevant. The evidence speaks for itself. Here is where the idea first appeared. Here is how it developed through three drafts. Here is the feedback the student received and how they responded to it. Here is the moment in the fourth draft when the student threw out the opening paragraph and wrote something completely different because they realized the essay was really about something else.
That kind of evidence defends itself. An AI can generate a polished essay in seconds, but not a believable record of messy, recursive, genuinely human writing development over weeks.
This is the direction I believe the debate on AI and student writing, in admissions and across education, is heading. My vision of AI coaching aligns with 50 years of writing research that has always valued process over product. As AI-generated text becomes more sophisticated and harder to detect, schools will increasingly ask students to show their work rather than simply submitting a finished product. The students who manage this transition best will be the ones who documented their process from the beginning.
This is why we built the Writer's Journey Report at Profsy. It records every step of the writing process, from first freewrite to final draft, and produces a verifiable document that the student can submit alongside their essay. Learn more about how our writing process works.
What NACAC, MLA-CCCC, and the Common App Say
Three major organizations have weighed in on AI and student writing. Their perspectives converge on the same principles, and families should know what they are.
The National Association for College Admission Counseling (NACAC) updated its ethics guide in fall 2025 to add a dedicated section on artificial intelligence, urging colleges to ensure their use of AI “aligns with our shared values of transparency, integrity, fairness and respect for student dignity.” The update came after the University of North Carolina at Chapel Hill faced backlash in January 2025 when its student newspaper reported the school was using AI to evaluate grammar and writing style in applicant essays. NACAC's guidance now explicitly frames the issue as bidirectional. Students should be transparent about their AI use, and institutions should be transparent about theirs.
The MLA-CCCC Joint Task Force on Writing and AI, which represents the two largest professional organizations for writing instruction, published a set of principles that remain the most thoughtful framework available. Key recommendations include treating AI as a tool that can support writing instruction when used transparently, emphasizing process documentation over product evaluation, maintaining human oversight of AI-generated feedback, and recognizing that blanket bans on AI use aren't practical or pedagogically sound. Their Student Guide to AI Literacy makes a clear distinction that is easy to remember. Written communication happens between human writers and human readers. AI can participate in the process, but the communication has to be yours.
The Common App, for its part, treats AI-generated content as fraud when it is submitted without disclosure. Their language is specific. Misrepresenting AI output as your own original work falls under the same policy as misrepresenting your identity or fabricating your achievements. That language matters because it applies to the substance of the content, not to the tools used along the way. Using AI to brainstorm or check grammar does not trigger this policy. Submitting AI-generated paragraphs as your own writing does.
The through-line across all three organizations is the same: process and transparency. None of them says students should avoid AI entirely. All say the student's thinking and voice must remain at the center, and the process should be documentable.
What Parents Should Know
If you are a parent reading this, and you have made it this far, you are probably trying to figure out where to draw the line. You know that your student has access to AI. So does every other kid applying to college this year. You want your kid to be competitive. You expect them to be honest. You want to help them, not take over.
A good place to start is talking about the distinction between AI writing and AI coaching. Make sure they understand that having ChatGPT generate their essay is not just risky from a detection standpoint but fundamentally counterproductive. The college essay is an opportunity for self-discovery. The most successful essays reveal a process of self-discovery with the writer's unique, authentic voice. Using AI to write the essay means no discovery and no authentic voice. ChatGPT's version might look competent, but it's really worthless.
If your student is using any AI tools, apply the three rules above. Can they explain every sentence? Can they walk you through how the essay was written? Did the AI push them to think harder? If the answer to all three is yes, the AI use was ethical and productive.
Consider whether they have any documentation of their writing process. If they are working with a private counselor who edits their essay, there may be no record of who contributed what. If they are using an AI tool that generates text, there is no record of original thinking. Process documentation protects them, regardless of what detection tools schools use or how those tools evolve in the future.
The world of student writing and college admissions is in transition. We are still adjusting to the new technology, which seems to change every month. The families who will have the easiest time during this transition period are the ones whose students can open a document and say: here is how I wrote this. Here is every idea I explored. Here is every draft.
The process becomes the proof.
For a practical guide to getting started with the writing process before August, see my guide on how to start the Common App essay. For more resources across every stage of the essay process, visit our college essay resource hub.