This paper introduces a new method for extracting a set of representative points from a continuous distribution. These points are generated by minimizing the Kullback-Leibler divergence, an information-based measure of the disparity between two probability distributions; we refer to them as Kullback-Leibler points. Using the link between the total variation distance and the Kullback-Leibler divergence, we prove that the empirical distribution of Kullback-Leibler points converges to the target distribution. We also illustrate through simulations that Kullback-Leibler points have advantages over representative points generated by Monte Carlo or other deterministic sampling methods. Furthermore, to avoid frequent evaluation of an expensive distribution, we propose an adaptive version of Kullback-Leibler points, which sequentially learns the complex or unknown distribution and updates the representative points accordingly. Kullback-Leibler points can be used to simulate complex probability densities and to explore and optimize expensive black-box functions.