As every card player knows, most card games share a large number of actions and situations: stacking cards in columns according to some allowed sequence, for instance, or drawing cards from a deal. This holds for both multi-player games and solitaire (patience) games. Despite these strong similarities, every game has some peculiarity that makes it distinctive and that affects its complexity and, ultimately, its enjoyability. Interestingly, from an AI planning perspective, most of the differences emerge in the problem description: domain models tend to be very similar because the actions that can be performed are similar. In this paper we envisage the exploitation of solitaire card games as a pool of interesting benchmarks. To "access" these benchmarks, we exploit state-of-the-art tools for automated domain model generation, LOCM and ASCoL, to create domain models for a number of solitaires and to extract the underlying game constraints (e.g., the initial setup, stacking rules, etc.) that belong to the problem models. The contribution of our work is twofold. On the one hand, the analysis of the generated models, and of the learning process itself, gives insights into the strengths and weaknesses of the approaches, highlighting lessons learned regarding their sensitivity to the observed traces. On the other hand, an experimental analysis shows that the generated solitaires are challenging for state-of-the-art satisficing planners: solitaires can therefore provide a set of interesting and easy-to-extract benchmarks.