AI Disclosure Policy

Generative AI & Content Integrity Policy

1. Philosophy and Purpose

This site operates in the contested ethical space between substantive co-authorship and necessary assistive technology. The policy below is built on the understanding that generative AI, specifically large language models (LLMs), functions as a sophisticated instrument for "distant writing": a distributed activity in which the human author retains epistemic agency (intent and knowledge) while delegating specific generative labor (mechanics and structure) to non-agential systems.

For neurodivergent writers, AI tools often serve not as shortcuts for thought, but as bridges between internal cognition and external expression. This policy aims to establish a framework that allows for accessibility accommodations without compromising intellectual honesty or human accountability.

2. Core Ethical Principles

All content published on this site adheres to five non-negotiable principles:

  1. Human Accountability: Authorship requires the capacity to take responsibility. AI cannot be held accountable; therefore, the human author assumes full liability for every word, fact, and argument published.
  2. Intellectual Origin: The core ideas, creative vision, and argumentative structure must originate with the human author. AI is used to express existing ideas, not to generate ideas the author does not possess.
  3. Verification: AI output is treated as "unvetted source material." No AI-generated statement is published without independent human verification of facts and logic.
  4. Epistemic Agency: The author uses AI as a tool to overcome barriers related to executive function, working memory, or tone calibration, but retains full control over the final output.
  5. Transparency: We acknowledge the use of tools where they impact the nature of the work, distinguishing between AI-assisted editing and AI-generated content.

3. Permissible Use as Assistive Technology

Drawing on research into neurodivergent writing challenges (specifically those associated with ADHD and autism), this site classifies the following AI applications as legitimate accessibility accommodations:

  • Tone Calibration: Using AI to adjust the "register" of a draft (e.g., ensuring a post isn't too formal or too direct) to bridge the gap between intent and audience expectation.
  • Executive Function Support: Using AI to break down complex topics into outlines or to organize nonlinear insights into a coherent structure.
  • Working Memory Offloading: Using AI to hold complex ideas in a "buffer" while the author focuses on specific sections, preventing the loss of coherence that commonly occurs when working memory is taxed, as it often is with ADHD.
  • "Scaffolding" & Syntax: Using AI as a "virtual pool wall to push against" to overcome perfectionism-induced paralysis or to translate internal concepts into conventional prose.

4. Prohibited Uses

To maintain the distinction between tool use and co-creation, the following practices are strictly prohibited:

  • Idea Generation: Relying on AI to determine what to write about or to provide the primary arguments for a piece.
  • Unvetted Drafting: Copy-pasting AI output directly into a post without significant review, revision, and voice alignment.
  • Unverified Fact Reliance: Trusting AI for statistics, citations, or historical facts without checking primary sources. (AI should be treated as a "well-read but hallucination-prone colleague," not as a reference work.)

5. The Authorship Standard

In alignment with the standards of the Committee on Publication Ethics (COPE) and the International Committee of Medical Journal Editors (ICMJE), AI is never listed as an author. Qualification for authorship requires:

  1. Substantial contribution to the conception or design of the work.
  2. Drafting the work or revising it critically for intellectual content.
  3. Final approval of the version to be published.
  4. Accountability for all aspects of the work.

Since AI cannot fulfill criterion #4, it remains a tool, not a collaborator.

6. Disclosure & Transparency

We recognize that mandatory disclosure of AI tools used for accessibility can inadvertently force the disclosure of neurodivergence. Therefore, we adopt a context-specific disclosure model similar to Amazon KDP’s distinction between "AI-Generated" and "AI-Assisted":

  • AI-Assisted (No specific tag required): Content where the ideas, research, and draft originated with the human, and AI was used for editing, refining, brainstorming, or error-checking. This is treated similarly to using a spell-checker or a human editor.
  • AI-Generated (Explicit disclosure required): In the rare event that a specific component (such as a code snippet, a specific image, or a summary of external text) is generated by AI with minimal human alteration, this will be explicitly cited in the text or a footnote.

7. A Note on Plagiarism and Originality

While using AI output does not constitute "copyright infringement" in the traditional sense, presenting AI-generated ideas as one's own without attribution is a form of plagiarism. To avoid this:

  • We do not claim novel insights that were suggested solely by the LLM.
  • We verify that arguments are defensible and genuinely held by the human author.
  • We treat the AI model as a translator of our own thoughts, ensuring the final work represents the author's voice, not the statistical average of the training data.

> "AI didn't give me ideas I didn't already have. It helped me communicate the ones I did... That's not a shortcut. That's accessibility." — Pete Yagmin