Quantifying Food Waste in University Mensa: A Computer Vision-Based Volumetric Approach.

CHATTERJEE, POORNENDU RANJAN
2024/2025

Abstract

This thesis is part of a broader project, ‘REDUCE-FOODITY’, which aims to quantify the food waste generated in university canteens. The work presented here focuses on validating the accuracy of projected food-waste volumes obtained from RGB-D cameras against reliable manual ground-truth measurements. Ground-truth volumes were obtained by the water displacement method for selected categories of food items most often wasted by students. For each item, the ground-truth volume was compared with the RGB-D volume, which is computed from the camera’s projected volume and the output of a YOLO detection model. The ratios between projected and true volumes were recorded and plotted as range plots. Their distribution was approximately Gaussian, with the expected mean and standard deviation, and the concentration of values around the mean suggests that the model’s estimates are statistically consistent across food types, including items whose structure or inter-layer voids distort the projected volume registered by the camera. In addition, apparent (buoyant) densities of the items were calculated to explain systematic differences between RGB-D and ground-truth volumes. Food-waste volume was measured with an RGB-D camera, by water displacement, and by weight-based estimation, and the results were compared to assess their accuracy. The study aims to validate the described computer-vision system as a scalable tool for systematic food-waste monitoring.
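The validation procedure summarised above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's actual code: the function names and all numeric values are hypothetical, and it only shows the three quantities the abstract mentions (projected-to-true volume ratios, their mean and standard deviation, and apparent density).

```python
# Illustrative sketch of the validation described in the abstract.
# All names and measurements here are hypothetical examples.

def volume_ratios(projected_ml, true_ml):
    """Ratio of camera-projected volume to ground-truth volume, per item."""
    return [p / t for p, t in zip(projected_ml, true_ml)]

def mean_and_std(values):
    """Sample mean and standard deviation of the recorded ratios."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, var ** 0.5

def apparent_density(mass_g, displaced_ml):
    """Apparent (buoyant) density: item mass over displaced water volume."""
    return mass_g / displaced_ml

# Hypothetical measurements for one food category (mL).
projected = [110.0, 95.0, 102.0, 120.0]   # RGB-D projected volumes
truth = [100.0, 90.0, 100.0, 110.0]       # water-displacement volumes

ratios = volume_ratios(projected, truth)
mu, sigma = mean_and_std(ratios)
```

A consistent system would show ratios clustered tightly around the mean, matching the roughly Gaussian distribution the thesis reports.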
Keywords: Food waste; Sustainability; Volumetric analysis; Gaussian curve; A.I.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/87567