A group of technology leaders, including Elon Musk, Steve Wozniak, and Andrew Yang, has issued an open letter calling on AI labs to halt the development of AI systems capable of human-competitive intelligence. The Future of Life Institute, a non-profit organization that advocates for the ethical development of AI, urged AI labs to cease training models more powerful than GPT-4, the latest version of the large language model developed by OpenAI.
The letter asks that AI developers “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it continues.
The letter warns that we are not currently prepared to control such powerful AI and that safety measures need to be created to guide its development. The Future of Life Institute has previously persuaded companies like Google-owned AI lab DeepMind, as well as Musk, to pledge never to develop lethal autonomous weapons systems.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter states.
“In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.”
GPT-4, which was recently released, is believed to be far more advanced than its predecessor, GPT-3.
ChatGPT, a viral AI chatbot trained on vast amounts of data from the internet, has stunned researchers with its humanlike responses to user prompts. Within two months of its launch, ChatGPT amassed 100 million monthly active users, making it the fastest-growing consumer application in history.