Several large-scale datasets, coupled with advances in deep neural network architectures, have pushed the boundaries of semantic segmentation performance in recent years. However, the scale and magnitude of such datasets prohibit the ubiquitous use and widespread adoption of the resulting models, especially in settings with severe hardware and software resource constraints. In this work, we propose two simple variants of the recently proposed IDD dataset, namely IDD-mini and IDD-lite, for scene understanding in unstructured environments. Our main objective is to enable research and benchmarking in training segmentation models. We believe this will enable the quick prototyping needed in applications such as optimal parameter and architecture search, and encourage deployment on low-resource hardware such as the Raspberry Pi. We show qualitatively and quantitatively that, with only one hour of training and 4 GB of GPU memory, satisfactory semantic segmentation performance can be achieved on the proposed datasets.