As AI systems become more powerful, ensuring they remain aligned with human values is one of the most important challenges of our time. AI Safety & Human Alignment on GCN connects researchers, ethicists, policy experts, educators, and developers to advance safety research, build evaluation tools, develop AI literacy curricula, and ensure diverse global perspectives shape the future of AI governance.
Ensuring everyone can understand, evaluate, and shape AI systems.
A free, multilingual curriculum for understanding AI capabilities, limitations, and implications.
Developing frameworks for democratic oversight of AI systems.
A living database of AI regulations, policies, and governance frameworks worldwide.
Organizations across the network are working on this challenge. GCN connects your skills to their needs.