Machine-learning algorithms are being deployed across an increasing range of applications, spanning high-performance and energy-constrained platforms. The statistical nature of these algorithms opens new opportunities for throughput and energy efficiency, by moving hardware into design regimes no longer limited to deterministic models of computation. This work aims to enable high accuracy in machine-learning inference systems whose computations are substantially affected by hardware variability. Previous work has addressed this by training inference-model parameters for a particular instance of variation-affected hardware. Here, training is instead performed for the distribution of variation-affected hardware, eliminating the need for instance-by-instance training. The approach, referred to as Stochastic Data-Driven Hardware Resilience (S-DDHR), is demonstrated for an in-memory-computing architecture based on magnetoresistive random-access memory (MRAM). S-DDHR successfully addresses different samples of stochastic hardware, which would otherwise suffer degraded performance due to hardware variability.
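As a minimal illustration of the core idea (not the paper's actual method, model, or noise statistics), the following sketch trains a toy classifier while sampling a fresh weight perturbation at every step. The Gaussian multiplicative noise is an assumed stand-in for MRAM bit-cell variability; the point is that training sees the distribution of variation-affected hardware rather than one fixed instance:

```python
import numpy as np

# Illustrative sketch only: multiplicative Gaussian weight noise is an
# assumed proxy for hardware variability, not the paper's noise model.
rng = np.random.default_rng(0)

# Toy linearly separable data: label is the sign of x1 + x2.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w, b, lr, sigma = np.zeros(2), 0.0, 0.1, 0.3

for _ in range(500):
    # Sample one realization of variation-affected hardware per step,
    # so training covers the *distribution* of hardware instances.
    w_hw = w * (1.0 + sigma * rng.normal(size=w.shape))
    p = 1.0 / (1.0 + np.exp(-(X @ w_hw + b)))  # logistic forward pass
    g = (p - y) / len(y)                       # cross-entropy gradient wrt logits
    w -= lr * (X.T @ g)   # straight-through: update the nominal weights
    b -= lr * g.sum()     # bias assumed implemented digitally (noise-free)

# Evaluate on several fresh hardware instances drawn from the same distribution.
accs = [((X @ (w * (1.0 + sigma * rng.normal(size=w.shape))) + b > 0)
         .astype(float) == y).mean() for _ in range(10)]
mean_acc = float(np.mean(accs))
```

Because each training step draws a new noise realization, the learned parameters are not tuned to any single hardware instance, so accuracy degrades gracefully across instances; training against one fixed realization would instead require per-chip retraining.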