
TABA, AI & SBIR/STTR
What Applicants Should Know

TIV’s Policy and Approach Toward AI Use

As a firm with more than a decade of expertise in AI Governance and Policy matters, TIV takes the ethical and practical use of AI extremely seriously for our clients, with our own internal perspectives grounded in best practice. We view artificial intelligence as a productivity and efficiency tool, not as a substitute for expert judgment, human ingenuity, and originality. Our approach to AI is one of human-led symbiosis, with such tools meriting use to support research, analysis, and routine tasks that improve efficiency and cost-effectiveness. These applications help streamline workflows, reduce administrative burden, and allow our experts to focus their time on higher-value strategic and technical work, while remaining aligned with federal research integrity expectations. In this context, AI functions as an assistive tool, not an author or decision-maker.

Accordingly, in our role as a leading TABA provider, TIV is committed to the success of our SBIR/STTR clients while remaining compliant with SBIR/STTR agency guidelines on AI usage. With the emergence of AI and Large Language Models (LLMs), some applicants have come to rely on such tools to perform most of their proposal writing. This increased reliance is often driven by compressed timelines, heightened competition, and the growing complexity of federal solicitations, which can make AI tools appear attractive as a shortcut to drafting or refining proposal content. In practice, however, applicants may unintentionally move from limited assistive use - such as brainstorming or language polishing - into a deeper dependence on AI-generated content, which is not only likely to be incongruent with the applicant's core ideas, but is also often internally inconsistent.

From an agency perspective, this shift raises material concerns. Generative AI systems do not possess contextual awareness of solicitation intent, programmatic nuance, or agency-specific evaluation criteria, and they may introduce inaccuracies, misaligned emphasis, or fabricated references that are difficult to detect during internal review. For instance, AI tools do not generally consider the “Agency Requirements” and “Solicitation Guidelines” that often govern the tone and content of a proposal. As a result, agencies have begun to clarify limits on acceptable AI use and to emphasize that responsibility for originality, accuracy, and compliance rests entirely with the applicant, regardless of the tools used during proposal preparation.

In sum: although LLMs certainly have utility, they should not be wholly relied on for proposal writing, as doing so undermines the originality and novelty of the ideas that applicants are trying to convey.

As such, applicants should be aware that non-compliant use of AI tools may place proposals at risk of administrative rejection, or of award cancellation following post-award findings and due diligence.


SBIR/STTR Agency Policies Related to AI

In fact, many SBIR/STTR agencies have already explicitly indicated that LLMs may not be used to write SBIR/STTR applications. For instance, the SBIR FAST partner Wisconsin CTC has highlighted the risks of hallucinations, fabricated citations, and technical inaccuracies when LLMs are used improperly in SBIR/STTR submissions, and explicitly warns prospective applicants and micro-grantees against improper use of AI.


https://wisconsinctc.org/2023/05/16/can-chatgpt-write-your-sbir-sttr-proposal/

Furthermore, multiple federal agencies participating in the SBIR/STTR programs have issued explicit guidance limiting or prohibiting the use of generative AI in proposal preparation - particularly where such tools replace original technical writing or introduce unverifiable content.

Relevant agency policies and guidance include those detailed under “Agency Notices & Other Information” below.

In addition to agency-specific proposal rules, broader federal AI governance frameworks increasingly inform expectations around disclosure, accountability, and risk mitigation in federally funded research. These include, but are not limited to, the frameworks described under “Department-Level and Broader AI Governance Guidance” below.

While specific requirements vary by agency, the overarching message is clear: applicants bear full responsibility for the accuracy, originality, and compliance of proposal content, regardless of whether AI tools are used during preparation.


Why Applicants Must Exercise Caution When Using AI

The use of AI in proposal development introduces unique risks that extend beyond traditional writing or editing tools, both within and beyond SBIR/STTR. AI-generated text can obscure the underlying rationale and decision-making process behind technical claims. When proposal content is not rooted in the applicant’s own reasoning, it becomes harder to defend methodological choices during peer review, Q&A, or post-award technical monitoring. This disconnect can weaken reviewer confidence in the team’s technical mastery and execution readiness. Because LLMs do not possess situational awareness of agency priorities, program intent, or solicitation-specific constraints, they may produce content that appears well-formed but is misaligned with evaluation criteria or compliance requirements. This can lead to subtle but consequential deviations from solicitation intent that are difficult to detect during review.

There are also longer-term considerations related to accountability. Proposals form the basis for contractual obligations, milestones, and audit trails. If AI-generated content introduces ambiguity in scope, feasibility, or ownership of ideas, those ambiguities can persist into the award phase, creating downstream risks during negotiations, reporting, or intellectual property review. Moreover, many applicants fail to align AI-suggested content with their own technical capabilities, often leading to awkward instances in which the applicant struggles to explain how they will accomplish elements such as the “Specific Aims” or “Technical Objectives” of their proposal.

Key concerns agencies have flagged include:

  • Unclear authorship and ownership of technical ideas

  • Inability to trace factual or methodological claims

  • Introduction of fabricated references or unsupported assertions

  • Intellectual property and copyright concerns and ethical risks

  • Misrepresentation of the applicant’s capabilities and understanding of solicitation guidelines and agency requirements

Overview of Federal Guidance, University Recommendations, and General Best Practices


The following links provide insight into current, publicly available statements and policy documents as of 2026, so that applicants can understand each agency’s expectations, limitations, and compliance requirements. In many cases, agencies frame AI guidance within broader objectives related to originality, fairness, confidentiality, and research integrity. Finally, even where an explicit ban on AI-generated content is not expressed, TIV strongly urges SBIR/STTR candidates to exercise caution in their use of, and reliance on, AI-generated content.

Agency Notices & Other Information

NIH – National Institutes of Health

NIH policy notice: Supporting Fairness and Originality in NIH Research Applications (NOT-OD-25-132) — This official NIH notice clarifies that applications “substantially developed by AI” or containing sections substantially developed by AI are not considered original and will not be accepted; the policy also introduces limits on the number of applications any one Principal Investigator can submit per year, effective for grant submissions on or after September 25, 2025. NIH AI Policy & Application Limits (NOT‑OD‑25‑132)

NIH Extramural Nexus explanation: This detailed NIH blog post describes how the policy is intended to support originality, fairness, and creativity in NIH applications while recognizing limited appropriate AI use. NIH “Apply Responsibly” News & AI Guidance

University of Utah summary on NIH AI guidance: This provides a helpful overview, from a university’s perspective, of the NIH position on AI tools and why applications developed with significant AI assistance may be considered non-original. NIH AI Limits Overview (University of Utah)

NSF – National Science Foundation

Official NSF notice: This policy clarifies that proposers are responsible for the accuracy and authenticity of any proposal content developed with the assistance of generative AI and encourages disclosure of AI use in project descriptions; it also prohibits reviewers from uploading any confidential information (such as proposal text or review records) to publicly accessible AI tools. NSF Notice on Generative AI in the Proposal Process

Duke University summary: Another institutional interpretation of the NSF notice, from the university perspective, which may be helpful for understanding how researchers and sponsors might approach AI when developing proposals. NSF AI Guidance Summary (Duke University)

NASA – National Aeronautics and Space Administration

NASA SMD (Science Mission Directorate): While NASA does not currently prohibit the use of generative AI tools in proposal preparation, NASA expects proposers to take responsibility for the accuracy and authenticity of content not developed by the proposal team, including proper acknowledgment where assistive tools or contributions from outside the team are used. NASA SMD AI FAQ on Proposal Use and Authorship

NEH – National Endowment for the Humanities

NEH Policy on the Use of Artificial Intelligence for NEH Grant Proposals — NEH explicitly permits applicants to use AI tools but requires applicants to acknowledge any AI-generated text by footnoting or marginal notation; failure to comply may render an application ineligible. NEH Policy on AI Use in Grant Proposals (PDF)

Department-Level and Broader AI Governance Guidance

While not specific to grant proposal rules, the following federal AI guidance frameworks and resources increasingly influence how agencies craft policies or expectations around responsible AI use:

NIH Artificial Intelligence Policy Framework - The NIH Office of Science Policy outlines broader considerations for the responsible development and use of AI in research contexts. (Office of Science Policy)

Federal AI risk and governance frameworks - Agencies such as OSTP and NIST continue to release guidance on AI risks, accountability, and transparency that, while not direct grant proposal rules, inform agency expectations for responsible AI use across federally funded activities.

Disclaimer: While the resources found on this webpage are not exhaustive, they are intended to give prospective applicants to SBIR/STTR agencies and other solicitations an intuition for how to remain compliant with best practices, guidelines, and ethical considerations. TIV strongly encourages you to verify this information against the materials and resources provided by the aforementioned agencies, for assurance and to remain up-to-date.
