Whole slide imaging (WSI), also called digital virtual microscopy, is a relatively new imaging modality. It enables the application of AI and machine-learning methods to cancer pathology, helping to establish a means for the automatic diagnosis of cancer cases. However, designing machine-learning models for WSI is computationally challenging because of the ultra-high resolution of the images. The current state-of-the-art models use multiple instance learning (MIL), a weakly supervised learning method in which the model aggregates inferences from many smaller instances to make a final classification about the entire set. In the context of WSI, researchers divide the ultra-high-resolution image into many patches; the model then classifies the slide based on the array of inferences from those patches. Among the several ways of making this final classification, attention-based mechanisms have achieved the strongest accuracy scores. The Transformer, one attention-based architecture, has reported substantial improvements on WSI comprehension tasks. In this project, we studied and compared several WSI comprehension algorithms on three datasets: CAMELYON16+17, TCGA-Lung, and TCGA-Kidney. We found that attention-based MIL algorithms outperformed standard MIL algorithms at classifying whole slide images, achieving higher mean accuracy and AUC. However, no attention-based algorithm performed significantly better than the others, and their accuracy scores varied widely. Presumably, this is due to the limited number of training samples in the data corpus. Since it is difficult to collect additional samples from human subjects, machine-learning techniques such as transfer learning could help mitigate this issue.
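To make the patch-aggregation step concrete, the following is a minimal NumPy sketch of one common attention-based MIL pooling formulation (in the style of gated/attention pooling from the MIL literature). The function name, parameter names, and shapes are illustrative assumptions, not the specific models evaluated in this project: each patch embedding receives a learned attention weight, and the slide-level representation is the attention-weighted sum of patch features.

```python
import numpy as np

def attention_mil_pool(patch_feats, V, w):
    """Attention-based MIL pooling over one slide's patches (illustrative sketch).

    patch_feats: (N, D) array of patch embeddings extracted from a WSI.
    V:           (D, H) learned projection matrix (hypothetical parameter).
    w:           (H,)   learned attention vector (hypothetical parameter).
    Returns the slide-level embedding (D,) and per-patch attention weights (N,).
    """
    # Unnormalized attention score for each patch: w^T tanh(V^T h_i)
    scores = np.tanh(patch_feats @ V) @ w              # shape (N,)
    # Softmax over patches so the weights are nonnegative and sum to 1
    scores = scores - scores.max()                     # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()    # shape (N,)
    # Slide embedding = attention-weighted sum of patch features
    slide_feat = weights @ patch_feats                 # shape (D,)
    return slide_feat, weights

# Usage with random stand-in features: 50 patches, 8-dim embeddings
rng = np.random.default_rng(0)
H = rng.normal(size=(50, 8))
slide, attn = attention_mil_pool(H, rng.normal(size=(8, 4)), rng.normal(size=4))
```

A final classifier would then map `slide_feat` to a slide-level label; the attention weights also give a rough indication of which patches drove the prediction, which is one reason attention-based MIL is attractive for pathology.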