With automation becoming increasingly commonplace, tech boom wunderkinds, and everyone else, have been debating the future of artificial intelligence. On one side, Facebook founder Mark Zuckerberg argues that more intelligent services aid humanity.
On the other side is the founder of Tesla Motors and SpaceX, Elon Musk, who has frequently warned of humanity’s doom at the hands of our own creations.
"You know all those stories where there’s the guy with the pentagram and the holy water, and he’s like, sure he can control the demon?" Musk said during a talk on artificial intelligence at MIT in 2014. "It doesn’t work out."
To help allay his fears, Musk backed a nonprofit research group called OpenAI last December. Last week, the organization launched its first program: OpenAI Gym.
The "gym" has no dumbbells or punching bags. The program is meant to serve as a standards benchmark. By providing multiple tests, the program will allow AI developers to run – or "exercise" – new systems.
"We’re releasing the public beta of OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms," reads the group’s blogpost.
By ensuring that all developers have a common set of standards to go by, OpenAI hopes to head off the creation of a single humanity-ending algorithm.
"Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return," reads OpenAI’s mission statement.
The group has a number of financial backers, including Musk, and has an experienced team, including Ilya Sutskever and Wojciech Zaremba, researchers formerly with Google and Facebook.
There’s no guarantee, of course, that standardized artificial intelligence tests will catch on. Even if they do, they still may not be able to prevent the destruction of humanity.
Still, with so much at stake, it’s a step in the right direction.