AI Governance Compliance
How Taughtful Australia meets the 2026 AI governance requirements for EdTech and aligns with Victorian child welfare, family violence, and trauma-informed practice frameworks.
| Requirement | Deadline | Status | Impact on Startups | Institutional Benefit |
|---|---|---|---|---|
| Transparency Statements | Dec 2025 | Past Due | Must declare AI's role in the product | Enhances public trust |
| AI Impact Assessment | Continuous | Ongoing | Mandatory for each use case | Identifies bias and privacy risks |
| Strategic Adoption Plan | June 2026 | Upcoming | Agencies must have a roadmap | Aligns tech with mission objectives |
| Internal Case Register | Dec 2026 | Upcoming | Full inventory of tools required | Reduces "Shadow IT" and redundant apps |
| Staff Training | Dec 2026 | Upcoming | Mandatory AI literacy training | Uplifts workforce capability |
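The Status column follows mechanically from each deadline relative to the current date. A minimal sketch (deadlines taken from the table; end-of-month dates and the "Ongoing" label for continuous requirements are assumptions):

```python
from datetime import date

# Deadlines from the compliance table; None marks a continuous requirement.
DEADLINES = {
    "Transparency Statements": date(2025, 12, 31),
    "AI Impact Assessment": None,
    "Strategic Adoption Plan": date(2026, 6, 30),
    "Internal Case Register": date(2026, 12, 31),
    "Staff Training": date(2026, 12, 31),
}

def status(requirement: str, today: date) -> str:
    """Derive the compliance status label for a requirement."""
    deadline = DEADLINES[requirement]
    if deadline is None:
        return "Ongoing"  # continuous obligation, never "done"
    return "Past Due" if today > deadline else "Upcoming"
```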
Australian Government AI Frameworks
Taughtful aligns with the Australian Government's AI governance landscape, including the frameworks below. These voluntary principles and mandatory government requirements guide how we design, develop, and deploy AI in education.
Australia's 8 AI Ethics Principles
Department of Industry, Science and Resources (updated Dec 2025)
- Human, societal and environmental wellbeing — AI systems should benefit individuals, society and the environment
- Human-centred values — Respect human rights, diversity, and the autonomy of individuals
- Fairness — Inclusive, accessible, no unfair discrimination
- Privacy protection and security — Uphold privacy rights and ensure data security
- Reliability and safety — Operate reliably in accordance with intended purpose
- Transparency and explainability — People can understand when AI impacts them
- Contestability — Timely process to challenge AI use or outcomes
- Accountability — Identifiable responsibility and human oversight
Policy for the Responsible Use of AI in Government (v2.0)
digital.gov.au — Effective 15 December 2025
Mandatory requirements for Commonwealth entities: accountable officials, transparency statements, strategic AI adoption plans, internal use case registers, staff training, and AI use case impact assessments.
Source: digital.gov.au
National AI Plan & Guidance for AI Adoption
National AI Plan (Dec 2025) — Capture opportunity, spread benefits, keep Australians safe. Guidance for AI Adoption (Oct 2025) — 6 essential practices for responsible AI governance.
Source: industry.gov.au
Child Welfare & Education Frameworks
Taughtful is built on evidence-based frameworks across trauma-informed practice, child safety, family violence risk management, and inclusive care. These frameworks shape our observation language, AI-generated documents, and information sharing architecture.
Trauma-Informed Practice
All observation chips, AI prompts, and document templates use trauma-informed, strengths-based language grounded in neuroscience and developmental psychology. We frame behaviour as communication and prioritise safety, self-regulation, and relational connection.
Attachment Theory & Relational Practice
Cross-setting observations surface attachment and relational patterns across home, school, therapy, and community. AI-generated documents highlight secure-base relationships and avoid pathologising attachment-seeking behaviour.
MARAM Framework
Aligned with Victoria's Family Violence Multi-Agency Risk Assessment and Management Framework. Observation chips include MARAM-aligned risk indicators. AI prompts never minimise family violence and apply perpetrator-accountability principles.
Best Interest Case Practice Model
Case notes and documents are structured around best interest principles: child-centred, family-focused, culturally responsive, strengths-based, and evidence-informed. Cumulative harm patterns are identified across observation timelines.
Safe and Together Model
AI prompts never use “failure to protect” language. Documents partner with the non-offending parent and map perpetrator patterns of coercive control rather than blaming victim-survivors.
Victorian Child Safe Standards
Taughtful complies with all 11 Child Safe Standards. Our parent-as-gatekeeper consent model, audit trails, and access controls align with standards for governance, child participation, equity, and safeguarding culture.
FVISS & CISS Information Sharing
Safety-concern observations can bypass normal consent tiers under the Child Information Sharing Scheme (CISS) and Family Violence Information Sharing Scheme (FVISS). All sharing is audit-logged with the legal basis recorded.
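The consent-bypass-with-audit flow described here can be sketched as follows. This is a hypothetical illustration: the function and field names are invented, and the real schemes define their own legal thresholds and concern types.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ShareEvent:
    """One audit-log entry recording what was shared and on what legal basis."""
    observation_id: str
    recipient: str
    legal_basis: str  # e.g. "CONSENT", "CISS", "FVISS"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[ShareEvent] = []

def share_observation(observation_id: str, recipient: str,
                      safety_concern: bool, has_consent: bool):
    """Share only with consent or under a safety scheme; log every share."""
    if has_consent:
        basis = "CONSENT"
    elif safety_concern:
        basis = "CISS"  # or FVISS, depending on the type of concern
    else:
        return None  # no lawful basis: do not share
    event = ShareEvent(observation_id, recipient, basis)
    audit_log.append(event)  # every share is recorded with its legal basis
    return event
```

The key property is that no branch shares without appending an audit entry, and the "no lawful basis" branch refuses outright.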
Mandatory Reporting
When observations indicate potential abuse or neglect, Taughtful prompts mandatory reporters to consider their obligations under the Children, Youth and Families Act 2005. Reporting status is tracked for audit compliance.
Mental Health & Suicide Prevention
Observation chips include mental health awareness indicators informed by Youth Mental Health First Aid. AI has hard safety boundaries: it will never minimise suicide risk or self-harm indicators, and always recommends professional referral.
LGBTIQ+ Inclusive Practice
Taughtful supports affirmed names and pronouns as first-class fields. All AI-generated documents use inclusive language, respect diverse family structures, and avoid heteronormative assumptions. Aligned with Rainbow Tick Standards.
Restorative Practice
Behavioural observations are framed restoratively, not punitively. Strategy chips include restorative conversations, circles, impact exploration, and repair agreements. AI documents prioritise understanding harm and making amends.
Access to Early Learning & Adult Learning
KIS evidence documents incorporate AEL vulnerability-aware language for children in out-of-home care, Aboriginal families, and asylum seekers. Taughtful's onboarding and UX apply adult learning principles: role-specific pathways, progressive disclosure, and in-context help.
How Taughtful is Secure and Ethical
We build accountability in from the start — not as an afterthought. Our AI ethics and data policy framework is overseen by dedicated governance, ensuring transparency, constraint, auditability, and human veto at every step.
Security & Privacy
• All data stored in Australian data centres (Sydney region)
• AES-256 encryption at rest, TLS 1.3 in transit
• No personal data sent to LLM providers for training
• PII stripped before AI processing (two-pass: regex + NER)
• No data sold to third parties, ever
• Complete audit trail on all data access
• Right to deletion within 90 days
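The two-pass PII stripping mentioned above (regex, then NER) can be sketched like this. The patterns are illustrative and deliberately incomplete, and the `entities` parameter stands in for the output of an NER model rather than any real pipeline:

```python
import re

# Pass 1: regex patterns for structured PII (illustrative, not exhaustive).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+61|0)[2-478](?:[ -]?\d){8}\b"),  # AU formats
}

def regex_pass(text: str) -> str:
    """Replace structured identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ner_pass(text: str, entities: list[tuple[str, str]]) -> str:
    """Pass 2: redact named entities; `entities` is (surface_form, label)
    pairs as an NER model would return them."""
    for surface, label in entities:
        text = text.replace(surface, f"[{label}]")
    return text

def strip_pii(text: str, entities: list[tuple[str, str]]) -> str:
    """Regex pass first (cheap, deterministic), then the NER pass."""
    return ner_pass(regex_pass(text), entities)
```

Running the regex pass first means structured identifiers are already gone before entity redaction, so the NER pass only has to handle names and other unstructured PII.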
Human Control & Consent
• Parent controls exactly who sees what
• AI generates drafts — professionals review and approve
• Human veto on every AI-assisted decision
• Designed for neurodivergent children and vulnerable families
• Cultural safety: 8 Ways integration, ACCO prioritisation
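The draft-then-approve flow in the list above amounts to a small state machine: an AI-generated draft is never publishable until a named human approves it, and a veto is terminal. A minimal sketch with invented class and field names:

```python
from enum import Enum

class DraftState(Enum):
    AI_DRAFT = "ai_draft"
    APPROVED = "approved"
    VETOED = "vetoed"

class Draft:
    """An AI-generated document that only a human can finalise."""

    def __init__(self, body: str):
        self.body = body
        self.state = DraftState.AI_DRAFT
        self.reviewer = None  # set when a human reviews

    def review(self, reviewer: str, approve: bool) -> None:
        if self.state is not DraftState.AI_DRAFT:
            raise ValueError("draft already reviewed")
        self.reviewer = reviewer
        self.state = DraftState.APPROVED if approve else DraftState.VETOED

    @property
    def publishable(self) -> bool:
        # A draft never ships without explicit human approval.
        return self.state is DraftState.APPROVED
```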
Transparency & Accountability
• Clear disclosure of AI's role in document generation
• AI impact assessments for each use case
• Internal use case register and governance oversight
• Jimmie Martin, AI Ethics & Data Policy — dedicated governance
• Aligned with Australia's 8 AI Ethics Principles
Compliance & Standards
• NDIS compliant
• Data stored in Australia
• Victorian Child Safe Standards compliant
• MARAM-aligned risk assessment and safety sharing
• CISS/FVISS information sharing scheme support
• Mandatory reporting awareness and audit trail
• Meets 2026 AI governance requirements for EdTech
Our Commitment
Taughtful Australia uses AI to assist caregivers, educators, and families in supporting neurodivergent children. We are fully committed to meeting every requirement of the 2026 AI governance framework — because responsible technology means better outcomes for the children and families we serve.