Shape Skeleton Transformation Tools
A dataset for evaluating one-shot categorization of novel object classes

The following code shows how to create shape variants by manipulating a shape's skeletal representation. The skeletal representation decomposes the shape into parts; here we vary the lengths, orientations, widths, and positions of these parts, and the relationships between them, to create new shape variants.
Such new variants may be useful as stimuli for a variety of psychophysical tasks. For example, the code can be used to create classes of objects with similar or dissimilar statistics (e.g., by choosing the distributions from which the part parameters are sampled).
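A minimal sketch of the idea, assuming a toy skeletal representation (the part structure, parameter names, and jittering scheme below are illustrative, not the actual tools' API):

```python
import math
import random

# Hypothetical skeletal representation: each part carries the four
# properties varied by the tools (length, orientation, width, position).
base_skeleton = [
    {"length": 1.0, "orientation": 0.0,         "width": 0.2, "position": (0.0, 0.0)},
    {"length": 0.6, "orientation": math.pi / 4, "width": 0.1, "position": (1.0, 0.0)},
]

def make_variant(skeleton, sigma=0.1, rng=random):
    """Create a new shape variant by jittering each part's parameters.

    sigma controls how far a variant departs from the base shape:
    sampling many variants with a small sigma yields a class of similar
    shapes, while a larger sigma yields a more dissimilar class.
    """
    variant = []
    for part in skeleton:
        variant.append({
            "length": part["length"] * (1 + rng.gauss(0, sigma)),
            "orientation": part["orientation"] + rng.gauss(0, sigma),
            "width": part["width"] * (1 + rng.gauss(0, sigma)),
            "position": (part["position"][0] + rng.gauss(0, sigma),
                         part["position"][1] + rng.gauss(0, sigma)),
        })
    return variant

# A class of ten variants sampled around the same base shape:
variants = [make_variant(base_skeleton, sigma=0.15) for _ in range(10)]
```

Choosing the sampling distribution per class (rather than per variant) is what makes the classes themselves statistically similar or dissimilar.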
Related Papers
- Morgenstern, Y., Schmidt, F., & Fleming, R. W. (2020). A dataset for evaluating one-shot categorization of novel object classes. Data in Brief, 105302. doi.org/10.1016/j.dib.2020.105302
- Morgenstern, Y., Schmidt, F., & Fleming, R. W. (2019). One-shot categorization of novel object classes in humans. Vision Research, 165, 98-108. doi.org/10.1016/j.visres.2019.09.005
ShapeComp Model
An image-computable model of perceived shape similarity

ShapeComp is a model of 2D shape similarity that integrates numerous shape metrics and is highly predictive of human perceptual judgments. The multidimensional feature space allows organising shapes by their perceived similarities, making it easy to create perceptually uniform stimulus sets. The model and accompanying code include tools for synthesising new shapes using a GAN trained on >25,000 animal silhouettes. These could be useful for perceptual experiments in the lab or online, for brain imaging, or for training deep learning systems.
For more details check out the ShapeComp website and the papers below.
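The core idea can be sketched as follows, assuming each shape has already been embedded in a multidimensional feature space (the shape names, coordinates, and three-dimensional space below are illustrative; the real ShapeComp combines many shape metrics):

```python
import math

# Hypothetical ShapeComp-style embeddings: each shape is a point in a
# multidimensional feature space (dimensions here are illustrative only).
embeddings = {
    "cat":   [0.2, 0.9, 0.1],
    "dog":   [0.3, 0.8, 0.2],
    "snake": [0.9, 0.1, 0.7],
}

def shape_distance(a, b):
    """Perceived dissimilarity modeled as Euclidean distance in feature space."""
    return math.dist(embeddings[a], embeddings[b])

# Shapes closer together in the space are predicted to look more similar:
assert shape_distance("cat", "dog") < shape_distance("cat", "snake")
```

Because distances in the space track perceived similarity, a perceptually uniform stimulus set can be built by selecting shapes at roughly equal pairwise distances.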
Related Papers
- Morgenstern, Y., Hartmann, F., Schmidt, F., Tiedemann, H., Prokott, E., Maiello, G., & Fleming, R. W. (2021). An image-computable model of human visual shape similarity. PLOS Computational Biology, 17(6), e1008981. doi.org/10.1371/journal.pcbi.1008981
- Morgenstern, Y., Storrs, K. R., Schmidt, F., Hartmann, F., Tiedemann, H., Wagemans, J., & Fleming, R. W. (2024). The statistics of natural shapes predict high-level aftereffects in human vision. Current Biology, 34(5), 1098-1106. doi.org/10.1016/j.cub.2023.12.039
Liquids
Sets of high-quality liquid animations, plus a large set optimized for machine learning.
Observers are remarkably good at visually inferring the viscosity of flowing fluids. They use multiple mid-level shape and motion features to do so. How they choose, identify, compute, and combine these features is an important question.
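The combination step can be sketched as a weighted read-out of mid-level features into a single viscosity estimate (the feature names and weights below are illustrative, not the features or model from the papers):

```python
# Hypothetical mid-level features and a linear read-out, sketching how
# shape and motion cues might be combined into a viscosity estimate.
def predict_viscosity(features, weights, bias=0.0):
    """Weighted combination of mid-level features into a viscosity estimate."""
    return bias + sum(weights[name] * value for name, value in features.items())

features = {"spread": 0.8, "clumping": 0.2, "flow_speed": 0.9}   # a runny liquid
weights  = {"spread": -1.0, "clumping": 2.0, "flow_speed": -1.5}  # illustrative weights

estimate = predict_viscosity(features, weights)  # -1.75: a low (runny) estimate
```

Fitting such weights against perceptual judgments is one simple way to ask which features observers actually rely on.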
Here, we provide two smaller sets of high-quality renderings of 10 s liquid simulations (with corresponding perceptual data), together with a massive set of 20,000 lower-quality renderings of 0.67 s liquid simulations (with corresponding trained deep neural networks and perceptual data), optimized for machine learning purposes. The data can be used to investigate the visual perception of liquids, as well as to build (machine learning) models of liquid viscosity from dynamic animations and static images.
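For machine learning use, the 20,000 short clips would typically be split so that a viscosity model is evaluated on simulations it never saw during training. A minimal sketch, assuming a hypothetical clip index (the file names and label range are illustrative, not the dataset's actual layout):

```python
import random

# Hypothetical index of the machine-learning set: 20,000 short (0.67 s)
# renderings, each labeled with its simulated viscosity (illustrative values).
clips = [{"file": f"clip_{i:05d}.mp4", "viscosity": random.uniform(-3, 3)}
         for i in range(20000)]

def train_test_split(items, test_fraction=0.1, seed=0):
    """Shuffle with a fixed seed and split into train and held-out test sets."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

train, test = train_test_split(clips)  # 18,000 train clips, 2,000 test clips
```

Seeding the shuffle keeps the split reproducible across training runs.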
Related Papers
- van Assen, J. J. R., Nishida, S. Y., & Fleming, R. W. (2020). Visual perception of liquids: Insights from deep neural networks. PLOS Computational Biology, 16(8), e1008018. doi.org/10.1371/journal.pcbi.1008018
- van Assen, J. J. R., Barla, P., & Fleming, R. W. (2018). Visual features in the perception of liquids. Current Biology, 28(3), 452-458. doi.org/10.1016/j.cub.2017.12.037
- van Assen, J. J. R., & Fleming, R. W. (2016). Influence of optical material properties on the perception of liquids. Journal of Vision, 16(15), 12-12. doi.org/10.1167/16.15.12
STUFF dataset
Photographs of 200 categories of material

This dataset is the result of a systematic method for identifying material categories and images to mirror the full richness and complexity of material appearances in the real world. For this, we distilled concrete nouns in the American English language into 200 distinct concepts spanning materials as diverse as algae, brass, ebony, fleece, oil, rubber, and zinc. Then, we collected three high-quality, close-up, naturalistic photographs of each material concept in its typical aggregate state and form (e.g., liquid oil or grains of salt), including close-ups of object surfaces. The resulting 600 images of materials were cropped to square aspect ratio. We also provide an extended version with 15+ images per concept.
For a detailed description of how the dataset was constructed please refer to the paper below.
The STUFF Dataset: Photographs of 200 categories of material, 600 naturalistic material images (3 images per material category), 350 × 350 pixels
The STUFF Enhanced Dataset: Photographs of 200 categories of material, 3514 naturalistic material images (15+ images per material category), full resolution
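Cropping a photograph to square aspect ratio, as done for the 600 STUFF images, amounts to taking the centered square region of the original frame. A minimal sketch (the function name is our own; the box is in the (left, top, right, bottom) form expected by, e.g., PIL's Image.crop):

```python
def square_crop_box(width, height):
    """Compute the centered square crop box for an image of the given size.

    Returns (left, top, right, bottom), i.e. the largest centered square
    that fits inside a width x height image.
    """
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)

# A 4000 x 3000 photograph is cropped to its central 3000 x 3000 region:
box = square_crop_box(4000, 3000)  # (500, 0, 3500, 3000)
```

The square result could then be resized to the dataset's 350 × 350 pixel format.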
Related Papers
- Schmidt, F., Hebart, M. N., Schmid, A. C., & Fleming, R. W. (2025). Core dimensions of human material perception. Proceedings of the National Academy of Sciences, 122(10), e2417202122. doi.org/10.1073/pnas.2417202122