Machine learning has become a popular tool for making decisions and predictions based on experience, observations, and patterns within a given data set, without requiring explicit functional forms. In this paper, we describe an application of a supervised machine-learning algorithm to extinction regression for the second Gaia data release (Gaia DR2), using a training sample that combines the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), the Sloan Extension for Galactic Understanding and Exploration (SEGUE), and the Apache Point Observatory Galactic Evolution Experiment (APOGEE). The derived extinction in our training sample is consistent with other spectrum-based estimates, and the standard deviation of the cross-validation residuals is 0.0127 mag. A blind test carried out against the RAdial Velocity Experiment (RAVE) catalog yields a standard deviation of 0.0372 mag. Such a precise training sample enables us to regress the extinction, E(BP-RP), for 133 million stars in Gaia DR2. Of these, 106 million stars have uncertainties below 0.1 mag and therefore suffer less bias from the external regression. We also find large deviations between extinctions derived from different photometry-based methods, as well as between spectrum-based and photometry-based methods. This implies that spectrum-based estimates can bring more signal to a regression model than multiband photometry, and that a higher signal-to-noise ratio yields a more reliable result.
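
As a minimal sketch of the workflow described above (supervised regression trained on spectrum-based extinctions, validated by cross-validation, then blind-tested against an external catalog), consider the following Python example. The abstract does not name the specific learner or input features, so the RandomForestRegressor, the synthetic feature columns, and the placeholder labels below are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch of a cross-validated extinction regression (assumptions noted above).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Placeholder training set: rows are stars, columns are observed features
# (e.g. Gaia magnitudes, colors, parallax); y is the spectrum-based E(BP-RP).
X_train = rng.normal(size=(5000, 6))
y_train = rng.normal(loc=0.3, scale=0.1, size=5000)

# Assumed learner; the abstract only says "supervised machine-learning algorithm".
model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)

# Cross-validation: predict each training star from folds it was not fit on,
# then quote the standard deviation of the residuals (0.0127 mag in the paper).
y_cv = cross_val_predict(model, X_train, y_train, cv=5)
cv_sigma = np.std(y_cv - y_train)
print(f"cross-validation residual sigma: {cv_sigma:.4f} mag")

# Blind test against an independent catalog (RAVE in the paper): compare
# predictions for stars never used in training to their external estimates.
model.fit(X_train, y_train)
X_blind = rng.normal(size=(1000, 6))
y_blind = rng.normal(loc=0.3, scale=0.1, size=1000)
blind_sigma = np.std(model.predict(X_blind) - y_blind)
print(f"blind-test residual sigma: {blind_sigma:.4f} mag")
```

Once trained and validated this way, the same fitted model would simply be applied to the feature table of the full Gaia DR2 sample to produce the catalog-scale E(BP-RP) predictions and their uncertainties.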