AI Governance

    Stanford's Trustworthy AI research has demonstrated that model-level guardrails can be materially weakened under targeted fine-tuning and adversarial pressure. In controlled evaluations summarized in the AIUC-1 Consortium briefing (developed with CISOs from Confluent, Elastic, UiPath, and Deutsche Börse, alongside researchers from MIT Sloan, Scale AI, and Databricks), refusal behaviors degraded significantly once the models' safety patterns were shifted.