Deephaven is a query engine that excels at working with real-time data. Data scientists and developers use Deephaven to analyze capital markets, blockchains, cryptocurrency, gaming, sports, and e-commerce. Why not use it for addressing ethical issues and improving an organization's climate as well?
According to the MIT Sloan Management Review, toxic work culture is the biggest reason why people quit their jobs. Their research estimates it’s 10 times more important than salary.
Modern machine learning algorithms make it feasible to recognize toxic content in business messaging tools. Deephaven's real-time capabilities make it easy to act on that content the moment it arrives.
Today, we'll demonstrate how to build a working prototype that checks whether a new message posted to a Slack channel reads as toxic. If it does, a bot sends a warning message to the channel.
The process is simple and requires only three steps:
- Receive and store real-time Slack chat messages in a Deephaven table.
- Calculate the probability of toxicity for each message.
- Send a notification if a message is classified as toxic.
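Before wiring in Slack and Deephaven, the three steps above can be sketched end to end in plain Python. The keyword-based scorer below is a stand-in for the real model, and the word list and threshold are illustrative assumptions:

```python
# Toy end-to-end sketch of the pipeline: ingest -> score -> alert.
# score_message is a crude keyword-based stand-in for the real LSTM model.
TOXIC_WORDS = {"idiot", "stupid", "hate"}  # illustrative only
THRESHOLD = 0.2  # assumed cutoff for demo purposes

def score_message(text: str) -> float:
    # Crude proxy for the model: fraction of words that look toxic
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in TOXIC_WORDS for w in words) / len(words)

def handle_message(text: str, alerts: list) -> float:
    # Step 2 and 3: score the message and record an alert if it crosses the threshold
    prob = score_message(text)
    if prob > THRESHOLD:
        alerts.append(f"Warning: message flagged as toxic (p={prob:.2f})")
    return prob

alerts = []
handle_message("you are an idiot", alerts)            # 1 of 4 words -> alert
handle_message("great work on the release!", alerts)  # benign -> no alert
print(alerts)
```

The real version replaces `score_message` with an LSTM and the `alerts` list with a Slack bot, but the control flow stays the same.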
If you just want to look at some code, this GitHub repository has everything. For further details, keep reading!
Pull live data
To get messages from Slack, we'll use Socket Mode. To set up Socket Mode, we need to create an app and generate an app-level token.
After that, we're ready to request a private WebSocket URL:
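A minimal sketch of that request, assuming the app-level token (the one starting with `xapp-`) is stored in a `SLACK_APP_TOKEN` environment variable:

```python
# Request a single-use Socket Mode WebSocket URL from Slack.
# SLACK_APP_TOKEN is an assumed env var holding the app-level token.
import os

from slack_sdk import WebClient

app_token = os.environ["SLACK_APP_TOKEN"]

# apps.connections.open returns a private wss:// URL for Socket Mode
response = WebClient(token=app_token).apps_connections_open()
ws_url = response["url"]
print(ws_url)  # a wss://wss-primary.slack.com/... URL
```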
Let's connect to it! For our example, we want the WebSocket to deliver events only about new messages in a Slack channel:
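A hedged sketch of that connection using `slack_sdk`'s `SocketModeClient`, which manages the WebSocket for us. The environment variable names and the `subtype` filter are assumptions; the handler simply prints matching messages for now:

```python
# Connect over Socket Mode and handle only plain new-message events.
import os
from threading import Event

from slack_sdk import WebClient
from slack_sdk.socket_mode import SocketModeClient
from slack_sdk.socket_mode.request import SocketModeRequest
from slack_sdk.socket_mode.response import SocketModeResponse

client = SocketModeClient(
    app_token=os.environ["SLACK_APP_TOKEN"],  # app-level token (xapp-)
    web_client=WebClient(token=os.environ["SLACK_BOT_TOKEN"]),  # bot token (xoxb-)
)

def handle(client: SocketModeClient, req: SocketModeRequest) -> None:
    # Always acknowledge the envelope so Slack doesn't redeliver it
    client.send_socket_mode_response(SocketModeResponse(envelope_id=req.envelope_id))
    event = req.payload.get("event", {})
    # Keep only plain user messages (skip edits, joins, bot posts, etc.)
    if req.type == "events_api" and event.get("type") == "message" and "subtype" not in event:
        print(event["channel"], event["text"])

client.socket_mode_request_listeners.append(handle)
client.connect()
Event().wait()  # keep the process alive while events stream in
```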
DynamicTableWriter
Deephaven's DynamicTableWriter can help us create a live table to store incoming messages and their integer representations that will be used as features for our ML model:
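A sketch of that table, assuming Deephaven's Python API. The column names are illustrative, and `on_new_message` stands in for the Socket Mode handler above, with `token_ids` produced by whatever tokenizer the model was trained with:

```python
# Create a live Deephaven table for incoming messages and their token ids.
from deephaven import DynamicTableWriter
import deephaven.dtypes as dht

# One string column for the raw text, one int-array column for its token ids
writer = DynamicTableWriter({"Message": dht.string, "Tokens": dht.int32_array})
messages = writer.table  # a live table; new rows appear as write_row is called

def on_new_message(text: str, token_ids: list) -> None:
    # Called from the Socket Mode handler for each incoming Slack message
    writer.write_row(text, dht.array(dht.int32, token_ids))
```

Because `messages` is a live table, everything downstream of it (including the model predictions) updates automatically as rows arrive.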

Predicting
To recognize toxic patterns in incoming Slack messages, we'll use a basic LSTM model trained on a Kaggle dataset:
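A hedged sketch of the scoring step using `deephaven.learn` to bridge the live table and a pre-trained Keras model. The `model.h5` filename, the `Tokens` column (assumed to hold fixed-length token-id sequences), and the output column name are all assumptions:

```python
# Score each new row of the live table with a pre-trained Keras LSTM.
import numpy as np
from tensorflow.keras.models import load_model

from deephaven import learn
from deephaven.learn import gather

model = load_model("model.h5")  # LSTM trained offline on the Kaggle dataset

def predict(features):
    # Probability of the most likely toxicity type for each row in the batch
    return model.predict(features).max(axis=1)

def table_to_numpy(rows, cols):
    # Gather the token-id columns into a 2D int array for the model
    return gather.table_to_numpy_2d(rows, cols, np_type=np.intc)

def scatter(preds, idx):
    # Write one prediction back into the output column
    return float(preds[idx])

# messages is the live table built with DynamicTableWriter
predicted = learn.learn(
    table=messages,
    model_func=predict,
    inputs=[learn.Input(["Tokens"], table_to_numpy)],
    outputs=[learn.Output("ToxicProb", scatter, "double")],
    batch_size=100,
)
```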
Here is the table with our live predictions:

Now let's use the Slack Web API client to send a message back to the channel with the result of the predictions. An alert triggers when the probability of toxic content exceeds a threshold for at least one of the toxicity types:
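A sketch of the alerting step. The label names follow the Kaggle toxic-comment categories, while the threshold value and message wording are assumptions:

```python
# Post a warning to the channel when any toxicity probability crosses a threshold.
import os

from slack_sdk import WebClient

web_client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])  # bot token (xoxb-)
THRESHOLD = 0.8  # assumed cutoff

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def maybe_alert(channel: str, probs: dict) -> None:
    # Fire if at least one toxicity type exceeds the threshold
    flagged = [label for label in LABELS if probs.get(label, 0.0) > THRESHOLD]
    if flagged:
        web_client.chat_postMessage(
            channel=channel,
            text=f"Warning: this message looks {', '.join(flagged)}. Please keep it friendly!",
        )
```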
Let's test our bot:

This starter program just scratches the surface of integrating artificial intelligence (AI) into the workplace. But we hope it inspires you to use Deephaven to solve real-life problems!
Talk to us
If you have any questions, comments, or concerns, you can reach out to us on Slack - no toxicity, of course. We'd love to hear from you!
