AI Transparency Improves but Still Falls Short, Stanford Study Finds

AI companies are becoming more transparent about their large language models, according to a recent Stanford University study that points to significant progress in disclosure. The study, which scores companies against 100 transparency indicators, finds that the average transparency score has risen from 37 to 58 points out of a possible 100 in the past six months.

While all eight evaluated companies have improved, genuine transparency remains limited. Developers are most forthcoming about their models' capabilities and downstream applications, while crucial areas such as training data, model reliability, and impact remain opaque.

The study reveals a systemic lack of transparency in these crucial areas: almost all developers withhold information on the copyright status of their training data and on how their models are used across different countries and industries.

Although progress has been made, the AI community still has a long way to go on transparency. The researchers note that substantial gains are already within reach: if every company matched the most transparent developer on each indicator, overall scores would rise considerably.
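To make that "match the leader" point concrete, here is a minimal sketch of the calculation, using made-up binary indicator data. The real index uses 100 indicators and actual disclosure records; the company names, indicator count, and values below are purely illustrative.

```python
# Illustrative sketch: how much would scores rise if every company
# matched the most transparent developer on each indicator?
# All data here is hypothetical (1 = indicator disclosed, 0 = not).
scores = {
    "CompanyA": [1, 0, 1, 0, 1],
    "CompanyB": [0, 1, 1, 0, 0],
    "CompanyC": [1, 1, 0, 0, 1],
}

n_indicators = len(next(iter(scores.values())))

# Current average score across companies, as a fraction of indicators met.
avg = sum(sum(s) for s in scores.values()) / (len(scores) * n_indicators)

# Feasible ceiling: an indicator counts if at least one company already
# satisfies it, since every other company could in principle match them.
frontier = [max(s[i] for s in scores.values()) for i in range(n_indicators)]
ceiling = sum(frontier) / n_indicators

print(f"Average score today: {avg:.0%}")
print(f"If everyone matched the best per indicator: {ceiling:.0%}")
```

Running this toy example, the average rises from about 53% to 80%, which is the shape of the improvement the researchers describe: most indicators are already disclosed by someone, so the gap is one of adoption rather than feasibility.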

The increased transparency is a positive step, but how can the AI community ensure genuine and comprehensive transparency in the development and deployment of large language models? #AIEthics 🤔