Accurate simultaneous localization and mapping (SLAM) is fundamental to autonomous navigation, as it requires both estimating the robot’s motion and consistently constructing or aligning a map. However, most existing datasets are collected in feature-rich environments and do not adequately address repetitive scenes, which cause perceptual aliasing, odometry drift, and loop-closure failures. We present a new multi-sensor dataset specifically designed to evaluate SLAM performance in environments with repetitive scenes. It includes two representative scenarios: a riverside bike path exhibiting frame-level repetition and an urban development district exhibiting block-level repetition. Additional campus sequences are provided as non-repetitive baselines. The platform integrates two 3D LiDARs, an RGB-D camera, three IMUs, and GNSS. By explicitly incorporating both frame- and block-level repetitive patterns, this dataset enables systematic analysis of perceptual aliasing in SLAM and serves as a reproducible benchmark for developing robust SLAM systems.