Research Publications

Peer-reviewed research on AI safety, platform accountability, and superintelligence alignment.

AI Alignment

IMCA+ v1.2.2: Intrinsic Moral Consciousness Architecture

ASTRA Safety Research Team • November 2025

A multi-substrate framework for provably aligned superintelligence, featuring a consciousness-adjacent moral architecture, hardware-level safety mechanisms, and 2,000+ mechanized proofs.

500+ citations · 7 safety layers · 2,000+ mechanized proofs
Regulatory

Regulatory Horizon Scanning for Frontier AI

ASTRA Safety Research Team • In Progress

Anticipatory governance under uncertainty: an analysis of 60+ regulatory gaps across 15 AI governance domains, with interactive economic displacement modeling.

Dashboard available · Paper coming soon