There is a growing demand for intelligent systems that continually learn knowledge from a data stream. Continual learning requires both the preservation of previous knowledge (i.e., avoiding catastrophic forgetting) and the acquisition of new knowledge. Unlike previous works that focus only on model adaptation (e.g., regularization, network expansion, memory rehearsal), we propose a novel training scheme named acquisitive learning (AL), which emphasizes both knowledge inheritance and knowledge acquisition. AL starts from a carefully selected model with pre-trained knowledge (the inherited model) and then adapts it to new data using segmented training. The selection is performed by injecting random noise into candidate inherited models and measuring their robustness, since a more robust inherited model yields higher accuracy in subsequent knowledge acquisition. This selection criterion is validated by visualization of the loss landscape and by a quantitative roughness measurement. The combination of the selected inherited model and segmented knowledge acquisition reduces catastrophic forgetting by 10× on the CIFAR-100 dataset.
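The noise-injection selection step described above can be sketched as follows. This is a minimal NumPy illustration assuming a linear classifier; the function names, the noise scale, and the use of accuracy under weight perturbation as the robustness proxy are our assumptions for illustration, not the exact procedure in the paper:

```python
import numpy as np

def accuracy(W, X, y):
    # Linear classifier: predicted class is the argmax of X @ W.
    return float(np.mean(np.argmax(X @ W, axis=1) == y))

def robustness_score(W, X, y, sigma=0.1, trials=20, rng=None):
    # Average accuracy under Gaussian perturbations of the weights.
    # A model sitting in a flatter region of the loss landscape
    # loses less accuracy when its weights are perturbed.
    rng = np.random.default_rng(0) if rng is None else rng
    scores = [accuracy(W + rng.normal(0.0, sigma, W.shape), X, y)
              for _ in range(trials)]
    return float(np.mean(scores))

def select_inherited_model(candidates, X, y, sigma=0.1):
    # Choose the candidate pre-trained model with the highest
    # noise-perturbed accuracy (a proxy for landscape flatness).
    return max(candidates, key=lambda W: robustness_score(W, X, y, sigma))
```

Under this sketch, the selected inherited model would then serve as the starting point for segmented training on the new data.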