This study explores strategies for long-term reservoir simulation by combining a generic rule-based reservoir management model (RMM) with machine learning (ML) models for two major multipurpose reservoirs, Allatoona Lake and Lake Sidney Lanier, in the southeastern United States. First, a standalone RMM is developed to simulate daily release and storage for Water Years 1981–2015. Next, using Long Short-Term Memory (LSTM) networks as the ML technique, a standalone LSTM model is trained on reservoir inflow and meteorological observations to simulate reservoir release and to estimate reservoir storage through a water-balance calculation. Three hybrid modeling strategies are then developed: the first uses RMM output as an additional LSTM input (H1), the second uses the LSTM release as the initial release estimate in the RMM (H2), and the third combines the first two strategies (H3). The Nash–Sutcliffe efficiency (NSE) for release (NSE-r), for storage (NSE-s), and their mean (NSE-avg) are used for model evaluation. Overall, H1 improves NSE-r to 0.65 and 0.54 for Allatoona and Lanier, respectively, compared with the standalone RMM (0.44 and 0.21); however, like the standalone LSTM, its storage trajectory is not physically feasible. H2 and especially H3 retain the best features of the RMM and LSTM: H3 achieves NSE-avg of 0.695 and 0.55 for Allatoona and Lanier, respectively, outperforming the RMM (0.615 and 0.29). These findings suggest a robust simulation capacity for large-scale water management in future studies.
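For readers unfamiliar with the two quantities central to the evaluation, the following is a minimal sketch of the daily water-balance storage update and the Nash–Sutcliffe efficiency metric. This is an illustration only, not the study's implementation; the function names and toy data are assumptions, and real applications would include evaporation, diversions, and unit conversions omitted here.

```python
def simulate_storage(s0, inflow, release):
    """Propagate storage via the simple mass balance S[t+1] = S[t] + I[t] - R[t].
    Losses (e.g., evaporation) are ignored in this sketch; units are arbitrary volumes."""
    storage = [s0]
    for i, r in zip(inflow, release):
        storage.append(storage[-1] + i - r)
    return storage

def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 - (sum of squared errors) / (variance of obs about its mean).
    NSE = 1 is a perfect match; NSE = 0 means the model is no better than the observed mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

# Toy example: three days of inflow and release from an initial storage of 100
storage = simulate_storage(100.0, inflow=[10, 12, 8], release=[9, 11, 10])
print(storage)                        # [100.0, 101.0, 102.0, 100.0]
print(nse([9, 11, 10], [9, 11, 10])) # perfect match -> 1.0
```

In the hybrid strategies, a metric of this form scores both the simulated release series (NSE-r) and the storage series it implies through the water balance (NSE-s).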