We are retiring OpenAI Five as a competitor today, but the progress made and technology developed will continue to drive our future work. This isn't the end of our Dota work; we think that Dota is a much more intrinsically interesting and difficult (and now well-understood!) environment for RL development than the standard ones used today.

OpenAI Five's victories on Saturday, as compared to its losses at The International 2018, are due to a major change: 8x more training compute. In many previous phases of the project, we'd driven further progress by increasing our training scale. But after The International, we'd already dedicated the vast majority of our project's compute to training a single OpenAI Five model. So we increased the scale of compute in the only way available to us: training for longer.

Figure: OpenAI Five's TrueSkill as we've applied additional training compute, with lines demarcating major system changes (moving to a single courier, increasing LSTM size to 4096 units, upgrading to patch versions 7.20 and 7.21, and starting to learn buyback).

The graph is roughly linear, meaning that OpenAI Five benefited continually from additional compute (note this is a log-log plot, since the x-axis is the logarithm of compute and TrueSkill corresponds roughly to exponential progress). This graph evaluates all bots on the final game rules (1 courier, patch 7.21, etc.), even those trained on older ones. A steep slope after any of these changes indicates OpenAI Five adapting to that change; depending on the change, the evaluation may be unfair to the versions before it.

In total, the current version of OpenAI Five has consumed 800 petaflop/s-days and experienced about 45,000 years of Dota self-play over 10 realtime months (up from about 10,000 years over 1.5 realtime months as of The International), for an average of 250 years of simulated experience per day. The Finals version of OpenAI Five has a 99.9% winrate versus the TI version.
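For readers unfamiliar with the TrueSkill metric used above: it models each player's skill as a Gaussian with mean μ and uncertainty σ, and predicts match outcomes from the rating gap. As a rough illustration only (the parameter names and the β default below are the conventional TrueSkill values, not numbers from this post), the two-player win probability can be sketched as:

```python
import math

def win_probability(mu_a, mu_b, sigma_a, sigma_b, beta=25 / 6):
    """Approximate probability that player a beats player b under a
    TrueSkill-style model: Phi(delta_mu / sqrt(2*beta^2 + sigma_a^2 + sigma_b^2)).
    beta is the per-performance noise; 25/6 is the library's usual default."""
    delta_mu = mu_a - mu_b
    denom = math.sqrt(2 * beta**2 + sigma_a**2 + sigma_b**2)
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(delta_mu / (denom * math.sqrt(2))))
```

Evenly matched players come out at 0.5; a large positive μ gap (like the Finals version's edge over the TI version) pushes the prediction toward certainty, which is why a steady TrueSkill climb reflects exponential progress in head-to-head terms.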
OpenAI Five sees the world as a bunch of numbers that it must decipher. It uses the same general-purpose learning code whether those numbers represent the state of a Dota game (about 20,000 numbers) or a robotic hand (about 200). To build OpenAI Five, we created a system called Rapid which let us run PPO at a previously unprecedented scale. The results exceeded our wildest expectations, and we produced a world-class Dota bot without hitting any fundamental performance limits.

The surprising power of today's RL algorithms comes at the cost of massive amounts of experience, which can be impractical outside of a game or simulated environment. This limitation may not be as bad as it sounds; for example, we used Rapid to control a robotic hand to dexterously reorient a block, trained entirely in simulation and executed on a physical robot. But we think decreasing the amount of experience needed is a next challenge for RL.
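The PPO algorithm mentioned above centers on a clipped surrogate objective. As a minimal sketch (function and variable names are illustrative, not Rapid's actual code), the core computation looks like this:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate objective (to be maximized).

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: advantage estimate for each sample
    eps:       clip range; 0.2 is the value from the PPO paper
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    # Taking the minimum removes the incentive to move the policy
    # far outside the [1-eps, 1+eps] trust region in one update.
    return np.minimum(unclipped, clipped)
```

The clipping is what makes PPO stable enough to scale: each update can only shift the policy a bounded amount per sample, so many parallel rollout workers can feed one optimizer without destabilizing training.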