I signed the open letter titled “Pausing AI Experiments That Pose a Significant Risk”, as did AI experts, academics, researchers, and innovators around the globe, including Elon Musk and Andrew Yang.
The open letter urges leaders in the AI research community to:
- temporarily halt the development of AI systems more powerful than GPT-4
- consider the potential ethical implications of their work
as a necessary step in ensuring that AI technology is developed responsibly and ethically. It also poses a series of thought-provoking questions that highlight the risks of unchecked AI development, such as:
- the flooding of information channels with propaganda and untruth
- the automation of all jobs
- the development of nonhuman minds that could outsmart and replace humans, and
- the loss of control over our civilization
It’s all good to say that we must come together to encourage responsible AI development practices and ensure the technology is ethically developed, regulated, and governed, but…
- How do we get the world to agree?
- How do we get the AI baby back in the womb?
- How do we gain worldwide consensus?
- Who will police this?
- How will we warn people of the dangers, and that for 50% of them it will be a case of innovate or die?
There’s never been a better time to keep your eye on the digital ball, folks. We’re in for a bumpy ride!