OpenAI announces Superalignment grant fund to support research into evaluating superintelligent systems


OpenAI has announced a new grant program to support companies working on making superintelligent systems safe, as the company believes superintelligence will be possible within the next decade.

These cutting-edge systems will “be capable of complex and creative behaviors that humans cannot fully understand,” the company claims. The alignment process currently used to ensure the safety of AI systems relies on reinforcement learning from human feedback (RLHF).

Because RLHF depends on human supervision, it may be less effective at handling the sophisticated use cases that superintelligent AI will enable.
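For context, here is a minimal, purely illustrative sketch of where that human supervision enters RLHF: a reward model fit to pairwise human preference labels. The data, features, and linear model below are hypothetical toy stand-ins, not OpenAI's method; the point is that the training signal comes entirely from human judgments, which is the bottleneck the article refers to.

```python
# Toy sketch (not OpenAI's implementation): a linear reward model trained on
# pairwise human preference comparisons, the step in RLHF where human
# supervision enters. All data and features here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: each model response is summarized by a small feature vector.
n_pairs, dim = 200, 8
chosen = rng.normal(0.5, 1.0, size=(n_pairs, dim))    # responses humans preferred
rejected = rng.normal(0.0, 1.0, size=(n_pairs, dim))  # responses humans rejected

w = np.zeros(dim)  # linear reward model: reward(x) = w @ x
lr = 0.1

for _ in range(500):
    # Bradley-Terry model: P(chosen preferred) = sigmoid(r_chosen - r_rejected)
    margin = chosen @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    # Gradient ascent on the log-likelihood of the human preference labels.
    grad = ((1.0 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    w += lr * grad

# The learned reward model then guides RL fine-tuning; its quality is capped by
# how reliably humans could judge the comparisons above.
print("learned reward weights:", np.round(w, 2))
```

If a system's behaviors become too complex for human raters to compare reliably, the preference labels feeding this loop degrade, which is why scalable oversight of superintelligent systems is the focus of the grant program.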

Read More: OpenAI announces Superalignment grant fund to support research into evaluating superintelligent systems
