In the first part of this paper we discuss an importance sampling estimator, first suggested by Liu, known as the "flip estimator." Unlike most importance sampling estimators, this estimator enables us to reduce sampling error without the need to simulate new data. We extend the cases in which this estimator can be used and devise a way to find the optimal flip estimator for a given distribution. We then employ a cross-entropy approach to automate the search for the optimal flip estimator.

In the second part of this paper we propose and discuss a new type of kernel density estimator, the "double flipped kernel density estimator." Kernel density estimators allow one to reconstruct unknown distributions from data, but the results are usually heavily dependent on "nearby" data and, as a result, are unusable for rare-event estimation. The double flipped kernel density estimator overcomes this problem and can be used in heavy-tailed and rare-event settings.

In the final part of this paper we propose a new perfect independent Metropolis-Hastings algorithm that is applicable in cases where the conventional independent Metropolis-Hastings algorithm cannot be used or is difficult to implement. We then use this framework to devise a self-correcting forward Metropolis-Hastings algorithm.
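As background for the rare-event setting the paper addresses (this is a generic importance sampling sketch, not the flip estimator itself), the following example estimates the small tail probability P(X > 4) for X ~ N(0, 1). Drawing from a shifted proposal N(4, 1) places samples in the rare region, and each draw is reweighted by the likelihood ratio of target to proposal, which here simplifies to exp(8 - 4x). The threshold and proposal choice are illustrative assumptions, not taken from the paper.

```python
# Generic importance sampling for a Gaussian tail probability
# (background illustration only; not the paper's flip estimator).
import math
import random

random.seed(0)

def importance_sampling_estimate(n=100_000, threshold=4.0):
    """Estimate P(X > threshold) for X ~ N(0, 1) via a shifted proposal."""
    total = 0.0
    for _ in range(n):
        # Draw from the proposal N(threshold, 1), centered on the rare region.
        x = random.gauss(threshold, 1.0)
        if x > threshold:
            # Likelihood ratio of N(0,1) to N(threshold,1) densities:
            # exp(-x^2/2 + (x-t)^2/2) = exp(t^2/2 - t*x)
            total += math.exp(threshold**2 / 2.0 - threshold * x)
    return total / n

est = importance_sampling_estimate()
true_value = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(X > 4) = 1 - Phi(4)
print(est, true_value)
```

A crude Monte Carlo estimate with the same budget would see only a handful of exceedances of 4 (the true probability is about 3.2e-5), whereas the reweighted proposal concentrates every sample where it is informative.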