“We estimate the probability of winning to be above 95%”, said OpenAI’s Dota 2 bots at the start of their second game against extremely skilled humans. I knew they’d been trained up using a sophisticated reinforcement learning technique that had instilled them with millennia worth of experience. I didn’t know they’d been versed in trash talk.
It seems they had training time to spare, because they handily won two out of three games at the “OpenAI Five Benchmark” yesterday. This was the robo-team’s first outing against non-amateur players, in preparation for the show match they’ll play at the International 2018. There, they’ll face off against the best players in the world. The bots won their first game here in 14 minutes, so I don’t fancy humanity’s chances.
Update: And that's game, with the AI victorious in two rounds out of three. However, the show's still going on Twitch with OpenAI explaining more about how they built and tested their team, and what's next for them.
Original story: Research institute OpenAI, which Elon Musk co-founded in 2015, is set to send its super-smart self-taught Dota 2 bots to The International 2018 later this month to take on a team of experienced pros. Today, those bots are playing a benchmark match against five top players, including former pros, casters and analysts. The action will stream live on Twitch from 12:30pm PT/3:30pm ET/8:30pm BST.
The human team is made up of Blitz, Cap, Fogged, Merlini and Moonmeander, with commentary from Purge and ODPixel.
The bot team, called OpenAI Five, has certainly had enough practice to hold its own: it has been playing 180 years' worth of games against itself every day, picking up the intricacies of the MOBA along the way.
OpenAI's 1v1 bot beat pro player Danil "Dendi" Ishutin at last year's The International, and the institute says the learning process for OpenAI Five is much more complex, requiring 256 GPUs and 128,000 CPU cores.
If you're interested, it's worth reading OpenAI's blog post on the challenges its Five team faces, including the complex rules of Dota 2, and the fact that, at any given time, a hero could have more than 100,000 possible actions open to them.
Valve have announced plans to launch Artifact, their digital card-battling adaptation of wizard management simulator Dota 2, on November 28th. That’s the plan. That’s what they say now. Valve’s first big game since Dota 2 in 2013, Artifact turns the MOBA into a card game where players build decks to make wizards fight across three ‘lanes’ of the table and murder the other wizards’ base. Unlike Dota 2, Artifact won’t be free-to-play, costing $20 to buy in with starter decks – and more for more cards.
We all know that playing online with other random humans can be a pain, and apparently Ubisoft got plenty sick of watching players spew racist language in Rainbow Six: Siege. The developer instituted a system that automatically bans players who enter racial slurs into the game chat, which has resulted in plenty of outrage.
If Ubisoft can make life tough for these people, what are the other major multiplayer games doing to combat racism in their games? We checked in with Blizzard, Valve, and other developers to see how they deal with racial slurs, and what they do to the players who use them.
How does Riot police harassment?
Of all the games on this list, League of Legends might have the most extensive code of conduct, known formally as the “Summoner’s Code.” League’s “Instant Feedback System” has seen some reforms, but it basically scours through the game’s chat logs after someone submits a player report, then doles out a verdict in 15 minutes or less, or your pizza is free. The first offense gets you a 10-game chat restriction, then a 25-game chat restriction, then a two-week ban, and finally a permanent ban. Offending players even get a pleasant little in-game message about why they’re getting the hammer.
How do well-behaved players fight back?
Players can submit a report against someone at the end of a game, and the Instant Feedback System should return a verdict within 15 minutes. Sometimes, but not always, players will be notified if their report resulted in the punishment of another player, but Riot says that even if you don't get a message, that doesn’t necessarily mean the other player got off scot-free.
“We want a future where League is wholly free of slurs and hate speech, but penalties alone won’t get us there," said Riot senior technical designer Kimberly Voll in a statement to PC Gamer. "In recent years, we’ve been focusing more on the establishment of norms. We believe if there aren’t clear, understood, and shared rules on what’s OK in gaming, like there are in sports, then we’ll just be enforcing the same nasty things forever. Like Jeff Kaplan mentioned on behalf of Overwatch last year, enforcement alone stretches budgets. We agree, and believe it limits our imagination and audience as well.”
How does Epic Games police harassment?
It’s unclear how extensive punishments for abusive players are in Fortnite. Search online and you’ll find far more forum posts about temporary or permanent bans for players who break the rules by teaming up in solo mode than for harassment. There’s no text chat, just a voice chat system that’s push-to-talk by default for communication with your squadmates. Epic’s primary focus seems to be on cheaters, going so far as to file a lawsuit against two prominent players. Fortnite’s code of conduct page does warn players to “be graceful in victory and defeat."
"Discriminatory language, hate speech, threats, spam, and other forms of harassment or illegal behavior will not be tolerated,” it reads. PC Gamer has reached out to Epic for a more thorough explanation of how their system works, but has yet to receive a reply.
How do well-behaved players fight back?
Fortnite has a typical system wherein you can report the player who killed you in the post-death menu. If you need to report another player or your own teammates, the report function is squirreled away in the “feedback” menu option, and you need to be able to supply that player’s username. Epic has also invited players to use their support center for bigger issues.
How does Valve police harassment in Dota 2?
Valve operates on a typical system where more reports get you banned for longer periods of time. The lower rung of bans can be as short as 10 minutes to an hour. You’ll get a day-long ban if things get a bit more serious. And if you’re a real jerk, you’ll get a week, then a month or two, and finally a six month or permanent ban. Valve deliberately keeps this process vague, but it does warn abusive players ahead of time that continued bad behavior will result in longer bans.
How do well-behaved players fight back?
Valve has a pretty typical report system in place, but it’s basically only available at the end of a match. Players can pick from three categories (communication abuse, intentional ability abuse, and intentional feeding) and can leave a brief comment. You can leave up to three reports per week, and you’ll be notified if any action is taken against another player you reported.
How does PUBG Corporation police harassment?
If a PUBG player is caught harassing another player with racist or sexist language, they will first receive a three-day ban. A second incident will net a full week, and a third incident will net a full month. Any repeat offenses beyond that will earn a player a permanent ban. You can look at the comprehensive chart for a better idea of how other issues are tackled.
“It is unacceptable to disrespect or use offensive words towards others based on their race, gender, nationality, etc,” the code of conduct reads.
How do well-behaved players fight back?
There’s currently no way for someone to report another player who didn’t kill them, which makes reporting racist behavior a pain. PUBG representatives have said that players must submit a report on the PUBG forums that includes the reporting player’s username, the name of the player you’re reporting, the time and date of the incident, and a description of the incident. I wouldn’t want to be the guy sifting through all that footage.
PUBG allows you to directly report a user after they’ve killed you, although the closest category available for racial discrimination would be “improper nickname.” This feedback system is clearly not designed for combating harassment.
How does Valve police harassment in CS:GO?
Although CS:GO uses one of the most effective automated systems for shutting down cheaters, VACnet, it does not currently automatically ban or silence players who use racist language. VACnet and its accompanying "Overwatch" system are primarily focused on catching hackers and griefers. The Overwatch system recruits experienced players with good records, then gives them the tools to review footage of reported matches, and it’s up to them to give a collective verdict. If a player is caught being awful, they receive either a “minorly disruptive” or “majorly disruptive” designation. The first results in a ban of “at least 30 days,” but a second offense gets a lifetime ban.
A “majorly disruptive” designation automatically gets you a permanent ban, but Valve’s description only mentions cheating, not abusive behavior. There’s no mention of racism, sexism, or general verbal abuse in Valve’s descriptions of Overwatch or VACnet, so it’s unclear if the company has any major initiative against harassment other than the mute and report buttons.
We reached out to Valve for clarification on how its process works, but did not receive a reply by publishing time.
How do well-behaved players fight back?
Players can select another player during a match, which opens up an option to report or commend them. Abusive text and voice chat sit comfortably at the top of the list of options. You can also mute the player by checking the “block communications” box.
How does Psyonix police harassment?
Rocket League might be the closest to Rainbow Six: Siege in terms of automating bans for racial slurs. Back in 2017, Psyonix instituted a secret list of 20 words and variants that can trigger bans. Psyonix says each word has a certain threshold, and once that’s met, multiplayer bans will start at 24 hours, then 72 hours, a week, and finally a permanent ban.
Psyonix also has a chat ban system in place that, well, bans jerks from using the text chat window. It’s a little more lenient than the general language ban system, but players have to report the abuse (rather than the game auto-scanning), then the system scans the game that was just played for abusive language. If a reported player is found guilty, they're banned from chatting for 24 hours to one month. It’s not instant like Rainbow Six: Siege’s. If a player insists on using abusive language after their initial chat ban is up, they “may” get a permanent overall game ban. Players who get chat bans are notified whenever it happens.
How do well-behaved players fight back?
Rocket League's reporting mechanism is straightforward. You just click on the offending player’s profile, select “mute/report,” and then select from the available categories. Verbal harassment sits at the top. If Psyonix takes action against a player, they’re notified later in the main menu.
How does Blizzard police harassment?
Blizzard is one of the most outspoken studios on toxicity and related issues. However, that outspokenness more often translates to support tools for well-behaved players than any sort of explanation for how bad players are punished.
Blizzard used to punish repeatedly abusive chat users in Overwatch by simply muting them, but still allowing them to play. That’s now changed, with those players receiving lengthier and lengthier bans for each successive offense. Blizzard is unclear on the rubric it uses, and unclear on how that’s balanced between automated systems and real humans banging the gavel, but it does say that a player with enough reports and punishments on their record will receive a permanent ban. Negative players are given warnings prior to an actual punishment, something director Jeff Kaplan has said has helped stop players from causing further trouble.
How do well-behaved players fight back?
Since launch, Blizzard has allowed PC players to report individual players via the report function, and added the function to consoles in mid-2017. Besides picking a category of bad behavior, players also have the ability to include a brief description of their experience with each individual report. Blizzard has also stated that it searches out recordings of toxic behavior on YouTube, Twitch, or other sites to find negative players and address them.
After/during each match, Overwatch players can also select up to two players to block with the “avoid as teammate” option. Players can deselect or replace one person with another, but Blizzard has stated that that number may rise if the program doesn’t cause any issues. There’s also the new “find team” function that lets players avoid the pitfalls of solo queuing into a match with nothing but Hanzo mains.
PC Gamer reached out to Blizzard to clarify how these punishments are determined, but has yet to receive a response.
An awful-looking game called Climber recently vanished from Steam after several Dota 2 players reported they'd been scammed by Climber users pawning lookalike items.
The scam revolved around a Dota 2 item called the Dragonclaw Hook. The genuine Dragonclaw Hook is an immortal-rarity item for the hero Pudge. It was briefly available in early 2013 and can no longer be obtained outside of the Steam Community Market, where it can fetch upwards of $800.
However, the Dragonclaw Hook these players were offered was a carefully crafted fake. According to a screenshot posted on Reddit by angelof1991, Climber's counterfeit hook used the same image and description as the genuine article:
According to a screenshot from mage203, Climber even used Dota 2's logo in the Steam Community Market:
Climber's Steam Database entry corroborates these screenshots. The game's recent activity shows it was scrubbed from the storefront less than 24 hours after the Dota 2 logo was added to its page, and the fake Dragonclaw Hook is buried in its item definitions.
A cached version of Climber's Steam page shows it launched in Early Access on May 18, 2018 for $1. Its Steam description calls it "a game in which you need to go as far as possible," and it looks like a Max Dirt Bike-style game made in Microsoft Paint.
Climber's developer and publisher, respectively KIRILL_KILLER34 and The Team A, have one other game on Steam. It's called Space Vomit, and it looks just as atrocious as Climber. According to its Steam reviews, it's a shoddy $1 game that instantly gets you thousands of Steam achievements. Interestingly, Space Vomit also uses the same Early Access blurb as Climber, with a scant few words changed. Here's a side-by-side comparison:
Climber and all of its items have vanished from Steam, but at the time of writing, Space Vomit is still available, though it isn't on the Community Market. It's unclear whether the scammers merely skipped town after getting caught or if Valve intervened. Earlier today, Valve removed a game from Steam after its developer, Okalo Union, was accused of creating fake Team Fortress 2 items. However, like KIRILL_KILLER34 and The Team A, Okalo Union's Steam account is still live at the time of writing.
Notably, this whole mess comes weeks after Valve announced that it will no longer police what's on Steam unless it's illegal or "straight-up trolling." And with no moderation apparently in place to weed out games like Climber, it is likely that more scams like this will crop up in the foreseeable future, so check your trades carefully.