OpenAI’s former head of AI safety criticizes the company’s deprioritization of safety work

Jan Leike, OpenAI’s former head of AI alignment, has raised concerns about the company’s approach to AI safety. Leike argues that OpenAI needs to focus far more on preparing for and securing the next generations of AI models. He points out that, despite its promises, the company has not dedicated enough computing power to the safety team.

Leike stresses the importance of AI safety, especially in the context of developing superintelligent AI. He is convinced that OpenAI must take its responsibility toward all of humanity more seriously, highlighting the inherent dangers of building machines smarter than humans.

The departures of Leike and renowned AI researcher Ilya Sutskever from OpenAI are consistent with earlier reports of discontent within the company, particularly over commercialization and rapid growth. They raise important questions about the balance between innovation and responsible AI development.

How can companies effectively balance commercial interests with the crucial need for AI safety? 🔍 #AISafety