Handwriting recognition remains an unsolved problem in machine learning, due in part to the high variability of handwritten text. Most methods proposed in the literature reduce this variability through handcrafted or learned preprocessing techniques. We advocate an alternative hypothesis: that handwriting variability is better handled within the recognition system itself. We propose a general framework for explicitly integrating the variability factors as hidden parameters in the modeling. We apply our approach to Arabic and Latin datasets (NIST OpenHaRT, READ 2016, READ 2017) and improve the performance of three handwriting recognition systems based on HMMs (Hidden Markov Models), BLSTMs (Bi-directional Long Short-Term Memory networks), and CRNNs (Convolutional Recurrent Neural Networks).