In this work, a framework is proposed for decision fusion utilizing features extracted from vehicle images and their detected wheels. Siamese networks are employed to extract key signatures from pairs of vehicle images. Our approach then examines the degree of agreement between signatures generated from vehicle images to robustly integrate the resulting similarity scores and provide a more informed decision for vehicle matching. To that end, a dataset was collected containing hundreds of thousands of side-view vehicle images captured under different illumination conditions and elevation angles. Experiments show that our approach achieves higher matching accuracy by jointly taking into account the decisions made by both the whole-vehicle and the wheels-only matching networks.
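To make the decision-fusion idea concrete, the following is a minimal sketch, not the paper's actual method: two Siamese branches each produce a signature (embedding) per image, a similarity score is computed per branch, and the scores are combined by a weighted late fusion before thresholding. All names, weights, and the fusion rule here are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two signature vectors in [-1, 1].
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse_decision(vehicle_sim, wheel_sim, w_vehicle=0.6, w_wheel=0.4,
                  threshold=0.5):
    # Weighted late fusion of the whole-vehicle and wheels-only scores;
    # the weights and threshold are placeholders, not values from the paper.
    fused = w_vehicle * vehicle_sim + w_wheel * wheel_sim
    return fused, fused >= threshold

# Toy random embeddings standing in for Siamese-network signatures.
rng = np.random.default_rng(0)
anchor_vehicle = rng.normal(size=128)
cand_vehicle = anchor_vehicle + 0.1 * rng.normal(size=128)  # near-duplicate
anchor_wheels = rng.normal(size=64)
cand_wheels = anchor_wheels + 0.1 * rng.normal(size=64)

v_sim = cosine_similarity(anchor_vehicle, cand_vehicle)
w_sim = cosine_similarity(anchor_wheels, cand_wheels)
score, is_match = fuse_decision(v_sim, w_sim)
print(f"fused score = {score:.3f}, match = {is_match}")
```

In practice, each branch's embedding would come from a trained Siamese network rather than random vectors, and the fusion weights could be learned from validation data instead of fixed by hand.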