The Center for Democracy & Technology calls for stronger system-level guidance in NIST’s AI documentation standards
The Center for Democracy & Technology (CDT) has submitted detailed feedback on the U.S. National Institute of Standards and Technology's (NIST) Proposed Zero Draft for a Standard on Documentation of AI Datasets and AI Models. While welcoming the effort to enhance transparency and accountability in AI, CDT urged NIST to include clearer guidance on system-level documentation, implementation practices, and data governance.

The Center for Democracy & Technology (CDT) has responded to NIST's Extended Outline: Proposed Zero Draft for a Standard on Documentation of AI Datasets and AI Models, a framework that aims to shape voluntary standards for how organisations record and communicate key information about AI systems. The draft is part of the U.S. government's broader effort to encourage safe and transparent AI development under the AI Safety Institute and the NIST AI Risk Management Framework.
In its comments, CDT commended NIST for addressing context-aware documentation – that is, adapting documentation practices to reflect organisational goals, stakeholder needs, and the different stages of AI system development. CDT said that clearly defining who uses the documentation and why helps ensure that records are meaningful rather than purely procedural. The group also welcomed the inclusion of interoperability, AI lifecycle management, and transparency tradeoffs as guiding principles in the draft.
Call for system-level documentation
However, CDT criticised the decision to exclude system-level documentation (the broader record of how datasets, models, user interfaces, and third-party components interact within a full AI system) from the draft's scope. The organisation argued that many of AI's most significant risks arise from the way individual components interact, not from datasets or models alone.
"Our analysis identified multiple research frameworks for system documentation, showing that it is an emerging but essential field," CDT wrote. "Rather than excluding it, NIST should acknowledge its importance and clarify that system-level documentation will be a focus of future work."
More practical guidance needed
CDT also encouraged NIST to expand its recommendations on practical implementation. The group said organisations need clearer advice on how to prioritise what to document, how to tailor templates to their size and resources, and how frequently documentation should be updated. It also suggested that NIST provide frameworks to help teams decide which parts of data pipelines – such as preprocessing, filtering, or curation – most influence system performance and should therefore be documented in greater detail.
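To make that concrete, a prioritised pipeline record could look something like the minimal Python sketch below. The schema, field names, and impact labels are illustrative assumptions, not structures drawn from NIST's draft or CDT's submission.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineStage:
    """One documented step in a data pipeline (hypothetical schema)."""
    name: str                # e.g. "deduplication", "toxicity filtering"
    description: str         # what the step does and why
    performance_impact: str  # "high" | "medium" | "low" (assumed labels)
    last_updated: str        # ISO date of the most recent change

@dataclass
class DatasetRecord:
    """Hypothetical dataset-level record built from its pipeline stages."""
    dataset_name: str
    stages: list[PipelineStage] = field(default_factory=list)

    def high_impact_stages(self) -> list[PipelineStage]:
        # Stages that most influence system performance would, under
        # CDT's suggestion, be documented in the greatest detail.
        return [s for s in self.stages if s.performance_impact == "high"]

record = DatasetRecord(
    dataset_name="web-corpus-v2",  # hypothetical dataset
    stages=[
        PipelineStage("deduplication", "removes near-duplicate documents",
                      "high", "2025-01-14"),
        PipelineStage("language filter", "keeps English-language text only",
                      "medium", "2025-01-10"),
    ],
)
print([s.name for s in record.high_impact_stages()])  # ['deduplication']
```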
CDT further recommended that future drafts distinguish between datasets used for pretraining, fine-tuning, and evaluation, noting that each has different implications for fairness, bias, and accountability. Similarly, the group called for greater clarity in model documentation templates – particularly around data provenance, update processes, and governance.
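One way to encode that distinction is to tag every dataset record with its role, as in the sketch below. The three role categories come from CDT's comment, but the card fields and example entries are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class DatasetRole(Enum):
    PRETRAINING = "pretraining"
    FINE_TUNING = "fine-tuning"
    EVALUATION = "evaluation"

@dataclass
class DatasetCard:
    """Hypothetical per-dataset card; provenance and bias notes vary by role."""
    name: str
    role: DatasetRole
    provenance: str  # where the data came from, and under what licence
    bias_notes: str  # fairness and bias caveats relevant to this role

cards = [
    DatasetCard("web-corpus-v2", DatasetRole.PRETRAINING,
                "public web crawl, mid-2024 snapshot",
                "over-represents English-language sources"),
    DatasetCard("support-dialogues", DatasetRole.FINE_TUNING,
                "licensed customer-support transcripts",
                "narrow domain; limited dialect coverage"),
    DatasetCard("holdout-qa", DatasetRole.EVALUATION,
                "human-written question-answer pairs",
                "must remain disjoint from all training data"),
]

for card in cards:
    print(f"[{card.role.value}] {card.name}: {card.bias_notes}")
```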
Clarity on robustness, evaluation, and governance
The submission welcomed NIST’s inclusion of robustness as a documentation requirement but advised that its meaning be clarified. In AI research, robustness can refer to either resistance to adversarial manipulation or consistency of performance across test conditions. CDT said both dimensions should be addressed, alongside disclosures about whether training and test datasets overlap, which can distort evaluation results.
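One simple way to generate that disclosure is to measure verbatim overlap between training and test sets, as in the sketch below. This is an illustrative minimum only: it catches exact matches after light normalisation, while a real evaluation audit would also need near-duplicate detection.

```python
import hashlib

def fingerprint(text: str) -> str:
    # Normalise lightly, then hash; detects exact repeats only.
    return hashlib.sha256(text.strip().lower().encode("utf-8")).hexdigest()

def overlap_rate(train: list[str], test: list[str]) -> float:
    """Fraction of test examples that also appear verbatim in training data."""
    train_hashes = {fingerprint(t) for t in train}
    if not test:
        return 0.0
    hits = sum(1 for t in test if fingerprint(t) in train_hashes)
    return hits / len(test)

# Hypothetical toy data: one of the two test sentences leaks from training.
train_set = ["the cat sat on the mat", "rainy day in paris"]
test_set = ["The cat sat on the mat", "a genuinely unseen sentence"]
print(f"{overlap_rate(train_set, test_set):.0%} of test examples overlap")  # 50%
```

A non-zero overlap rate is exactly the kind of figure CDT suggests disclosing alongside evaluation results, since leaked test examples inflate reported performance.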
The organisation also questioned why governance appeared in the draft as a minimal section while being labelled “out of scope.” CDT argued that governance decisions are fundamental to model behaviour, influencing design, deployment, and risk management, and should be more fully integrated into the standard.
Why this matters
For civil society groups like CDT, AI documentation is not a bureaucratic exercise—it is a safeguard for accountability. Transparent records of how datasets are collected, models are trained, and systems are deployed help external researchers, regulators, and the public scrutinise AI impacts, from bias to misinformation.
"Documentation is one of the most powerful tools for aligning AI development with democratic values," said Miranda Bogen, Director of CDT's AI Governance Lab. "But to be effective, it must cover the entire system, not just its parts."