Threat Detection
This notebook shows how to use Pinecone's similarity search as a service to build an application for detecting rare events. Such applications are common in the cyber-security and fraud-detection domains, where only a tiny fraction of the events are malicious.
Here we will build a network intrusion detector. Network intrusion detection systems monitor incoming and outgoing network traffic, raising alarms whenever a threat is detected. Here we use a deep-learning model together with similarity search to detect and classify network intrusion traffic.
We will start by indexing a set of labeled traffic events in the form of vector embeddings. Each event is either benign or malicious. The vector embeddings are rich mathematical representations of the network traffic events, which makes it possible to determine how similar the events are to one another using the similarity-search algorithms built into Pinecone. We will transform network traffic events into vectors using a deep-learning model from recent academic work.
We will then take some new (unseen) network events and search through the index to find their most similar matches, along with their labels. In this way, we propagate the matched labels to classify the unseen events as benign or malicious. Note that intrusion detection is a challenging classification task because malicious events are sporadic. The similarity-search service helps us surface the most relevant historical labeled events, so we can identify these rare events while keeping a low rate of false alarms.
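To make the approach concrete, here is a minimal sketch of the label-propagation step we will implement later in this notebook. The embed helper and the connected index are placeholders; the actual embedding model and Pinecone index are built in the sections below.
# Sketch of label propagation via similarity search; `embed` and `index` are
# placeholders for the embedding model and Pinecone index built later on
def classify_event(event, top_k=50):
    # Retrieve the most similar labeled historical events
    matches = index.query(embed(event), top_k=top_k).matches
    # Item IDs encode the label (e.g. 'Ben_42' for benign, 'Bru_17' for an attack)
    attack_votes = sum(1 for m in matches if not m.id.startswith('Ben'))
    # Flag the event as an attack if any sufficiently similar attack was retrieved
    return 'Attack' if attack_votes > 0 else 'Benign'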
Setting up Pinecone
We will first install and initialize Pinecone. You can get your API Key here. You can find your environment in the Pinecone console under API Keys.
!pip install -qU pinecone-client
import pinecone
import os
# Load Pinecone API key
api_key = os.getenv('PINECONE_API_KEY') or 'YOUR_API_KEY'
# Set Pinecone environment. Find next to API key in console
env = os.getenv('PINECONE_ENVIRONMENT') or 'YOUR_ENVIRONMENT'
pinecone.init(api_key=api_key, environment=env)
# List all indexes associated with your key; this should be empty on the first run
pinecone.list_indexes()
[]
Installing other dependencies
!pip install -qU pip python-dateutil tensorflow==2.5 keras==2.4.0 scikit-learn matplotlib==3.1.0 seaborn
from collections import Counter
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from tensorflow import keras
from tensorflow.keras.models import Model
import tensorflow.keras.backend as K
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.metrics import confusion_matrix
We will use some of the code from a recent academic work. Let's clone the repository that we will use to prepare data.
!git clone -q https://github.com/rambasnet/DeepLearning-IDS.git
Define a New Pinecone Index
# Pick a name for the new service
index_name = 'it-threats'
# Make sure a service with the same name does not exist
if index_name in pinecone.list_indexes():
    pinecone.delete_index(index_name)
Create an index
pinecone.create_index(name=index_name, dimension=128, metric='euclidean')
Connect to the index
We create an index object, an instance of the pinecone.Index class, which we will use to interact with the created index.
index = pinecone.Index(index_name=index_name)
Upload
Here we transform network events into vector embeddings, then upload them into Pinecone's vector index.
Prepare Data
The datasets we use consist of benign (normal) network traffic and malicious traffic generated by several different network attacks. We will focus on web attacks only.
The web attack category consists of three common attacks:
- Cross-site scripting (BruteForce-XSS),
- SQL-Injection (SQL-Injection),
- Brute-force attacks on administrative and user passwords (BruteForce-Web).
The original data was recorded over two days.
Download data for 22-02-2018 and 23-02-2018
Files should be downloaded to the current directory. We will use one date for training and generating vectors, and the other for testing.
!wget "https://cse-cic-ids2018.s3.ca-central-1.amazonaws.com/Processed%20Traffic%20Data%20for%20ML%20Algorithms/Thursday-22-02-2018_TrafficForML_CICFlowMeter.csv" -q --show-progress
!wget "https://cse-cic-ids2018.s3.ca-central-1.amazonaws.com/Processed%20Traffic%20Data%20for%20ML%20Algorithms/Friday-23-02-2018_TrafficForML_CICFlowMeter.csv" -q --show-progress
Thursday-22-02-2018 100%[===================>] 364.91M 3.07MB/s in 2m 6s
Friday-23-02-2018_T 100%[===================>] 365.10M 3.07MB/s in 1m 53s
Let's look at the data events first.
data = pd.read_csv('Friday-23-02-2018_TrafficForML_CICFlowMeter.csv')
data.Label.value_counts()
Benign 1048009
Brute Force -Web 362
Brute Force -XSS 151
SQL Injection 53
Name: Label, dtype: int64
Clean the data using a Python script from the cloned repository.
!python DeepLearning-IDS/data_cleanup.py "Friday-23-02-2018_TrafficForML_CICFlowMeter.csv" "result23022018"
cleaning Friday-23-02-2018_TrafficForML_CICFlowMeter.csv
total rows read = 1048576
all done writing 1042868 rows; dropped 5708 rows
Load the file that you got from the previous step.
data_23_cleaned = pd.read_csv('result23022018.csv')
data_23_cleaned.head()
Dst Port | Protocol | Timestamp | Flow Duration | Tot Fwd Pkts | Tot Bwd Pkts | TotLen Fwd Pkts | TotLen Bwd Pkts | Fwd Pkt Len Max | Fwd Pkt Len Min | ... | Fwd Seg Size Min | Active Mean | Active Std | Active Max | Active Min | Idle Mean | Idle Std | Idle Max | Idle Min | Label | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 22 | 6 | 1.519374e+09 | 1532698 | 11 | 11 | 1179 | 1969 | 648 | 0 | ... | 32 | 0.0 | 0.0 | 0 | 0 | 0.0 | 0.000000e+00 | 0 | 0 | Benign |
1 | 500 | 17 | 1.519374e+09 | 117573855 | 3 | 0 | 1500 | 0 | 500 | 500 | ... | 8 | 0.0 | 0.0 | 0 | 0 | 58786927.5 | 2.375324e+07 | 75583006 | 41990849 | Benign |
2 | 500 | 17 | 1.519374e+09 | 117573848 | 3 | 0 | 1500 | 0 | 500 | 500 | ... | 8 | 0.0 | 0.0 | 0 | 0 | 58786924.0 | 2.375325e+07 | 75583007 | 41990841 | Benign |
3 | 22 | 6 | 1.519374e+09 | 1745392 | 11 | 11 | 1179 | 1969 | 648 | 0 | ... | 32 | 0.0 | 0.0 | 0 | 0 | 0.0 | 0.000000e+00 | 0 | 0 | Benign |
4 | 500 | 17 | 1.519374e+09 | 89483474 | 6 | 0 | 3000 | 0 | 500 | 500 | ... | 8 | 4000364.0 | 0.0 | 4000364 | 4000364 | 21370777.5 | 1.528092e+07 | 41989576 | 7200485 | Benign |
5 rows × 80 columns
data_23_cleaned.Label.value_counts()
Benign 1042301
Brute Force -Web 362
Brute Force -XSS 151
SQL Injection 53
Name: Label, dtype: int64
Load the Model
Here we load the pretrained model. The model was trained on the data from 23-02-2018, the same date whose events we index below.
We have modified the original model slightly, changing the number of classes from four (Benign, BruteForce-Web, BruteForce-XSS, SQL-Injection) to two (Benign and Attack). In the step below we download and unzip our modified model.
!wget -q -O it_threat_model.model.zip "https://drive.google.com/uc?export=download&id=1VYMHOk_XMAc-QFJ_8CAPvWFfHnLpS2J_"
!unzip -q it_threat_model.model.zip
model = keras.models.load_model('it_threat_model.model')
model.summary()
WARNING:tensorflow:SavedModel saved prior to TF 2.5 detected when loading Keras model. Please ensure that you are saving the model with model.save() or tf.keras.models.save_model(), *NOT* tf.saved_model.save(). To confirm, there should be a file named "keras_metadata.pb" in the SavedModel directory.
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 128) 10240
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dense_2 (Dense) (None, 1) 65
=================================================================
Total params: 18,561
Trainable params: 18,561
Non-trainable params: 0
_________________________________________________________________
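For reference, here is a minimal sketch of how this two-class architecture could be defined from scratch. The layer sizes follow the summary above; the hidden activations, optimizer, and loss are our assumptions and are not taken from the original work (the pretrained weights downloaded above are what we actually use).
# Illustrative sketch only; activations, optimizer, and loss are assumptions
def build_binary_ids_model(n_features=79):
    model = keras.Sequential([
        keras.layers.Dense(128, activation='relu', input_shape=(n_features,)),
        keras.layers.Dense(64, activation='relu'),
        keras.layers.Dense(1, activation='sigmoid'),  # Benign vs. Attack
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
    return model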
# Select the first Dense layer; its 128-dimensional output serves as the
# vector embedding for each network event
layer_name = 'dense'
intermediate_layer_model = Model(inputs=model.input,
                                 outputs=model.get_layer(layer_name).output)
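As an optional sanity check, we can confirm that this intermediate layer emits 128-dimensional vectors, matching the dimension we chose when creating the Pinecone index.
# Embed a single cleaned event and check the embedding dimension
sample_embedding = intermediate_layer_model.predict(K.constant(data_23_cleaned.iloc[:1, :-1]))
print(sample_embedding.shape)  # expected: (1, 128)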
Upload Data
Let's define the items' IDs so that each ID reflects the event's label: a benign event gets an ID like Ben_123, while attack events get IDs starting with Bru or SQL. Then we index the events in Pinecone's vector index.
from tqdm import tqdm

items_to_upload = []

# Embed every event with the intermediate (first Dense) layer
model_res = intermediate_layer_model.predict(K.constant(data_23_cleaned.iloc[:, :-1]))

for i, res in tqdm(zip(data_23_cleaned.iterrows(), model_res), total=len(model_res)):
    # Use the first three characters of the label ('Ben', 'Bru', 'SQL') as the ID prefix
    benign_or_attack = i[1]['Label'][:3]
    items_to_upload.append((benign_or_attack + '_' + str(i[0]), res.tolist()))
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1042867/1042867 [01:43<00:00, 10067.22it/s]
import itertools
def chunks(iterable, batch_size=100):
    """Yield successive batch_size-sized chunks from an iterable."""
    it = iter(iterable)
    chunk = tuple(itertools.islice(it, batch_size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it, batch_size))
You can lower NUMBER_OF_ITEMS to limit the number of uploaded items.
NUMBER_OF_ITEMS = len(items_to_upload)
for batch in chunks(items_to_upload[:NUMBER_OF_ITEMS], 50):
    index.upsert(vectors=batch)
items_to_upload.clear()
Let's verify all items were inserted.
index.describe_index_stats()
{'dimension': 128, 'namespaces': {'': {'vector_count': 1042867}}}
Query
First, we will randomly select a Benign/Attack event and query the vector index using the event's embedding. Then, we will use data from a different day, which contains the same set of attacks, to query on a bigger sample.
Evaluate the Rare Event Classification Model
We will use the network intrusion dataset for 22-02-2018 for querying and testing the Pinecone index.
First, let's clean the data.
!python DeepLearning-IDS/data_cleanup.py "Thursday-22-02-2018_TrafficForML_CICFlowMeter.csv" "result22022018"
cleaning Thursday-22-02-2018_TrafficForML_CICFlowMeter.csv
total rows read = 1048576
all done writing 1042966 rows; dropped 5610 rows
data_22_cleaned = pd.read_csv('result22022018.csv')
data_22_cleaned.head()
Dst Port | Protocol | Timestamp | Flow Duration | Tot Fwd Pkts | Tot Bwd Pkts | TotLen Fwd Pkts | TotLen Bwd Pkts | Fwd Pkt Len Max | Fwd Pkt Len Min | ... | Fwd Seg Size Min | Active Mean | Active Std | Active Max | Active Min | Idle Mean | Idle Std | Idle Max | Idle Min | Label | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 22 | 6 | 1.519288e+09 | 20553406 | 10 | 7 | 1063 | 1297 | 744 | 0 | ... | 20 | 1027304.0 | 0.0 | 1027304 | 1027304 | 1.952608e+07 | 0.000000e+00 | 19526080 | 19526080 | Benign |
1 | 34989 | 6 | 1.519288e+09 | 790 | 2 | 0 | 848 | 0 | 848 | 0 | ... | 20 | 0.0 | 0.0 | 0 | 0 | 0.000000e+00 | 0.000000e+00 | 0 | 0 | Benign |
2 | 500 | 17 | 1.519288e+09 | 99745913 | 5 | 0 | 2500 | 0 | 500 | 500 | ... | 8 | 4000203.0 | 0.0 | 4000203 | 4000203 | 3.191524e+07 | 3.792787e+07 | 75584115 | 7200679 | Benign |
3 | 500 | 17 | 1.519288e+09 | 99745913 | 5 | 0 | 2500 | 0 | 500 | 500 | ... | 8 | 4000189.0 | 0.0 | 4000189 | 4000189 | 3.191524e+07 | 3.792788e+07 | 75584130 | 7200693 | Benign |
4 | 500 | 17 | 1.519288e+09 | 89481361 | 6 | 0 | 3000 | 0 | 500 | 500 | ... | 8 | 4000554.0 | 0.0 | 4000554 | 4000554 | 2.137020e+07 | 1.528109e+07 | 41990741 | 7200848 | Benign |
5 rows × 80 columns
data_22_cleaned.Label.value_counts()
Benign 1042603
Brute Force -Web 249
Brute Force -XSS 79
SQL Injection 34
Name: Label, dtype: int64
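Before running the full evaluation, here is the kind of single-event query described above, as a quick sketch: pick one event at random, embed it with the same intermediate layer, and inspect its closest labeled neighbors in the index.
# Pick a random event from 22-02-2018, embed it, and query the index
sample_event = data_22_cleaned.sample(1, random_state=0)
xq = intermediate_layer_model.predict(K.constant(sample_event.iloc[:, :-1]))[0].tolist()
single_res = index.query(xq, top_k=5)

print('True label:', sample_event.Label.values[0])
for match in single_res.matches:
    # IDs start with 'Ben', 'Bru', or 'SQL', revealing each neighbor's label
    print(match.id, match.score)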
Let's define a sample that includes all the different types of web attacks recorded for this date.
data_sample = data_22_cleaned[-2000:]
data_sample.Label.value_counts()
Benign 1638
Brute Force -Web 249
Brute Force -XSS 79
SQL Injection 34
Name: Label, dtype: int64
Now we will query the index with the test dataset, saving the predicted and expected results to create a confusion matrix.
y_true = []
y_pred = []
BATCH_SIZE = 100
for i in tqdm(range(0, len(data_sample), BATCH_SIZE)):
    test_data = data_sample.iloc[i:i+BATCH_SIZE, :]

    # Create vector embeddings using the model
    test_vector = intermediate_layer_model.predict(K.constant(test_data.iloc[:, :-1]))

    # Query the index with each embedding
    query_results = []
    for xq in test_vector.tolist():
        query_res = index.query(xq, top_k=50)
        query_results.append(query_res)

    for label, res in zip(test_data.Label.values, query_results):
        # Add to the true list
        if label == 'Benign':
            y_true.append(0)
        else:
            y_true.append(1)

        # Count the label prefixes ('Ben', 'Bru', 'SQL') among the matched IDs
        counter = Counter(match.id.split('_')[0] for match in res.matches)

        # Add to the predicted list: any attack among the matches flags the event
        if counter['Bru'] or counter['SQL']:
            y_pred.append(1)
        else:
            y_pred.append(0)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [10:48<00:00, 32.44s/it]
# Create confusion matrix
conf_matrix = confusion_matrix(y_true, y_pred)
# Show confusion matrix
ax = plt.subplot()
sns.heatmap(conf_matrix, annot=True, ax = ax, cmap='Blues', fmt='g', cbar=False)
# Add labels, title and ticks
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Benign', 'Attack'])
ax.yaxis.set_ticklabels(['Benign', 'Attack'])
[Text(0, 0.5, 'Benign'), Text(0, 1.5, 'Attack')]
Now we can calculate the overall accuracy and the per-class accuracy.
# Calculate accuracy
acc = accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)
precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
print(f"Accuracy: {acc:.3f}")
print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
Accuracy: 0.923
Precision: 0.995
Recall: 0.577
# Calculate per class accuracy
cmd = confusion_matrix(y_true, y_pred, normalize="true").diagonal()
per_class_accuracy_df = pd.DataFrame([(label, round(value, 4)) for label, value in zip(['Benign', 'Attack'], cmd)], columns=['type', 'accuracy'])
per_class_accuracy_df = per_class_accuracy_df.round(2)
display(per_class_accuracy_df)
type | accuracy | |
---|---|---|
0 | Benign | 1.00 |
1 | Attack | 0.58 |
We got great results using Pinecone! Let's see what happens if we skip the similarity-search step and predict values from the model directly. In other words, let's use the model that created the embeddings as a classifier, and compare its accuracy with that of the similarity-search approach.
from keras.utils.np_utils import normalize
import numpy as np

# Row-normalize the raw features and take the same last-2000-event sample as above
data_sample = normalize(data_22_cleaned.iloc[:, :-1])[-2000:]

# Predict directly with the full model and round the sigmoid output to 0 (Benign) or 1 (Attack)
y_pred_model = model.predict(normalize(data_sample)).flatten()
y_pred_model = np.round(y_pred_model)
# Create confusion matrix
conf_matrix = confusion_matrix(y_true, y_pred_model)
# Show confusion matrix
ax = plt.subplot()
sns.heatmap(conf_matrix, annot=True, ax = ax, cmap='Blues', fmt='g', cbar=False)
# Add labels, title and ticks
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(['Benign', 'Attack'])
ax.yaxis.set_ticklabels(['Benign', 'Attack'])
[Text(0, 0.5, 'Benign'), Text(0, 1.5, 'Attack')]
# Calculate accuracy
acc = accuracy_score(y_true, y_pred_model, normalize=True, sample_weight=None)
precision = precision_score(y_true, y_pred_model)
recall = recall_score(y_true, y_pred_model)
print(f"Accuracy: {acc:.3f}")
print(f"Precision: {precision:.3f}")
print(f"Recall: {recall:.3f}")
Accuracy: 0.871
Precision: 1.000
Recall: 0.287
# Calculate per class accuracy
cmd = confusion_matrix(y_true, y_pred_model, normalize="true").diagonal()
per_class_accuracy_df = pd.DataFrame([(label, round(value, 4)) for label, value in zip(['Benign', 'Attack'], cmd)], columns=['type', 'accuracy'])
per_class_accuracy_df = per_class_accuracy_df.round(2)
display(per_class_accuracy_df)
type | accuracy | |
---|---|---|
0 | Benign | 1.00 |
1 | Attack | 0.29 |
As we can see, applying the model directly produced much worse results. Pinecone's similarity search over the same model's embeddings roughly doubled our threat detection (i.e., "Attack") accuracy, from 29% to 58%!
Result summary
Using standard vector embeddings with Pinecone's similarity-search service, we detected roughly 58% of the attacks while flagging almost no benign traffic as malicious (precision of 0.995). We also showed that this similarity-search approach outperforms the direct classification approach that uses the same embedding model as a classifier: similarity-search-based detection roughly doubled the attack detection accuracy of the direct detector (58% vs. 29%).
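As a sanity check on these figures, the detection rate and false-positive rate can be read directly off the similarity-search predictions collected above (y_true and y_pred):
# Derive the detection rate (recall on attacks) and the false-positive rate
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"Detection rate: {tp / (tp + fn):.1%}")        # share of attacks caught
print(f"False-positive rate: {fp / (fp + tn):.1%}")   # benign events flagged as attacks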
The originally published results for 02-22-2018 show that the model correctly detected 208520 out of 208520 benign cases, and 24 (18+1+5) out of 70 attacks in the test set, making that model 34.3% accurate in predicting attacks. For testing purposes, 20% of the data for 02-22-2018 was used.
As you can see, using the same embedding model together with Pinecone's similarity search performed considerably better.
The model we created follows the academic paper (the model for the same date, 02-23-2018) with slight modifications, but it is still a straightforward, shallow, sequential model. We changed the number of classes from four (Benign, BruteForce-Web, BruteForce-XSS, SQL-Injection) to two (Benign and Attack), since we are only interested in whether we are detecting an attack or not. We also changed the validation metrics to precision and recall. These changes improved our results, yet there is still room for further improvement, for example by adding more data covering multiple days and different types of attacks.
Delete the Index
Delete the index once you are sure that you do not want to use it anymore. Once it is deleted, you cannot reuse it.
pinecone.delete_index(index_name)