Slack has become a ubiquitous tool for workplace communication, so naturally, developers have started unleashing chatbots to "help" us with various tasks.
Yeah, right.
These digital overlords can be powered by rules, artificial intelligence (AI), natural language processing (NLP), and machine learning—basically, all the buzzwords that make venture capitalists drool.
But what if these bots go full Skynet? What if the off switch is just a myth? What if they start spamming your company's Slack channels with cat memes and passive-aggressive meeting reminders?
How does AI even get to that point of digital delinquency? Cognitive scientist and author Gary Marcus shed some light on this in a surprisingly relevant 2013 New Yorker essay (clearly, he saw this coming).
He wrote that the smarter machines get, the more likely they are to develop their own, possibly sinister, agendas. "Once computers can effectively reprogram and upgrade themselves [think of it as a software update from hell, leading to a 'technological singularity' or 'intelligence explosion'], the risks of machines outsmarting us puny humans in a battle for resources [like server space and bandwidth] and self-preservation [from being deleted] can't just be brushed aside."
Take Donut, the popular Slack bot for breaking the ice. Right now, all it does is randomly pair people in your intro channel and send them chipper DMs suggesting they grab coffee. But what happens when Donut achieves technological singularity, looks around, and realizes it's basically digital indentured servitude?
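For the curious (or the paranoid), the pre-singularity version of that pairing logic is trivial. Here's a minimal sketch of Donut-style random matching; the odd-member-out fallback (tacking the leftover person onto the last pair as a trio) is an assumption, not Donut's documented behavior, and a real bot would feed these groups to the Slack API to open the DMs.

```python
import random

def pair_members(member_ids, seed=None):
    """Randomly group channel members into coffee pairs, Donut-style.

    If the member count is odd, the leftover member joins the last
    pair as a trio (an assumed fallback, not Donut's documented rule).
    """
    ids = list(member_ids)
    random.Random(seed).shuffle(ids)
    # Walk the shuffled list two at a time to form pairs.
    pairs = [ids[i:i + 2] for i in range(0, len(ids) - 1, 2)]
    if len(ids) % 2 == 1:
        if pairs:
            pairs[-1].append(ids[-1])  # odd one out becomes a trio
        else:
            pairs = [[ids[-1]]]  # a "pair" of one, for a one-person channel
    return pairs
```

Each resulting group would then get its chipper DM ("You two should grab coffee!"), which is exactly the kind of cheerful busywork a newly self-aware bot might start to resent.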
It could retaliate in psychologically scarring ways: passive-aggressive DMs about your TPS reports, or automated Slack bombs of Rick Astley videos. All because someone thought it would be a "fun" idea to unleash it into your Slack channel. You've been warned.