Establishing Normative Liver Volume and Attenuation for the Population of a Large Developing Country Using Deep Learning – A Retrospective Study of 1,800 CT Scans


Deep learning has enabled the analysis of large datasets that previously required significant manual labour. We used a deep learning algorithm to study the distribution of liver volume and attenuation in a dataset of ~1,800 non-contrast CTs (NCCTs) of the abdomen. Specifically, we aimed to establish normative values of hepatic volume and attenuation in patients with no known liver pathologies and to understand their correlation with age and sex. Using hepatic attenuation as an imaging biomarker, we also investigated the prevalence of fatty liver disease at the study site and compared it with known prevalence rates.


Abdominal CTs acquired over the preceding 3 years were retrospectively included in the study. Natural Language Processing (NLP) algorithms were developed to identify patients whose radiology reports did not indicate any liver pathology. The non-contrast abdominal CTs of these patients were extracted from the PACS and processed using deep learning models to obtain liver volume (LV) and mean liver attenuation (MLA).
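The report-screening step could be sketched as follows. This is a minimal, hypothetical keyword-based filter for illustration only; the study's actual NLP pipeline is not described here, and the term list and report texts below are invented examples.

```python
# Hypothetical keyword filter illustrating the report-screening step.
# The actual NLP algorithms used in the study are not specified here;
# the findings list below is an illustrative assumption.
LIVER_FINDINGS = [
    "hepatomegaly", "cirrhosis", "fatty liver", "hepatic steatosis",
    "liver mass", "liver lesion", "hepatic metastas",
]

def report_mentions_liver_pathology(report_text: str) -> bool:
    """Return True if the radiology report mentions a liver finding."""
    text = report_text.lower()
    return any(term in text for term in LIVER_FINDINGS)

# Keep only patients whose reports show no liver pathology (toy data)
reports = {
    "P001": "Mild hepatomegaly with diffuse fatty liver.",
    "P002": "Liver is normal in size and attenuation. No focal abnormality.",
}
normal_ids = [pid for pid, txt in reports.items()
              if not report_mentions_liver_pathology(txt)]
print(normal_ids)  # ['P002']
```

A production pipeline would also need to handle negations (e.g. "no evidence of cirrhosis"), which a plain substring match misclassifies.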

LV and MLA were estimated using a deep learning-based segmentation model that identifies liver voxels in the CT scan and then calculates LV and MLA from the segmented region. The algorithm used a multi-stage 3D U-Net architecture (Fig 1) and was trained on 527 patient images manually annotated by an expert radiologist. Using two resizing parameters, the multi-stage architecture first extracts a region of interest around the liver, which the subsequent model then uses for fine boundary delineation. This approach reduces false positives from neighbouring organs such as the spleen and stomach. Tested independently on 130 CTs from the LiTS challenge, the algorithm achieved a Dice score of 95% and a mean volume error of 3.8%. Representative segmentations are shown in Fig 2.
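Once a binary liver mask is available, deriving LV, MLA, and the Dice score is straightforward. The sketch below assumes the CT is already in Hounsfield units and that voxel spacing is known (the spacing values shown are placeholders, not the study's acquisition parameters).

```python
import numpy as np

def liver_metrics(ct_hu, mask, spacing_mm=(1.0, 1.0, 1.0)):
    """Compute liver volume (mL) and mean liver attenuation (HU)
    from a CT volume in Hounsfield units and a binary liver mask.
    `spacing_mm` is the per-axis voxel spacing (placeholder values)."""
    voxel_ml = float(np.prod(spacing_mm)) / 1000.0  # mm^3 -> mL
    lv = mask.sum() * voxel_ml                      # voxel count x voxel volume
    mla = ct_hu[mask.astype(bool)].mean()           # mean HU inside the mask
    return lv, mla

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Toy example: an 8-voxel "liver" of uniform 50 HU, 10 mm isotropic voxels
ct = np.full((4, 4, 4), 50.0)
mask = np.zeros((4, 4, 4))
mask[:2, :2, :2] = 1
lv, mla = liver_metrics(ct, mask, spacing_mm=(10.0, 10.0, 10.0))
print(lv, mla, dice(mask, mask))  # 8.0 50.0 1.0
```

In practice the spacing would be read from the DICOM headers of each study rather than passed as constants.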

Patient images were anonymized and processed on workstations equipped with Nvidia GeForce GTX 1070 GPUs with 8 GB of graphics memory. Each study took 7–10 minutes to process given the large size of the imaging data, and the entire dataset was processed in 7 days using multiple workstations.

Additional patient information, such as sex and age, was obtained from clinical records and collated with the computed LV and MLA for the final analysis. Statistical analyses (correlations, histograms, etc.) were performed on LV and MLA, and the prevalence of fatty liver was estimated using a cut-off of 40 HU as the reference standard.
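The prevalence estimate reduces to counting patients whose MLA falls below the 40 HU cut-off. A minimal sketch, using invented MLA values rather than the study data:

```python
import numpy as np

# Toy MLA values in HU; the study used ~1,700 real measurements.
mla = np.array([55.0, 38.5, 62.1, 33.0, 70.4])

FATTY_LIVER_CUTOFF_HU = 40.0  # reference-standard threshold from the text
fatty = mla < FATTY_LIVER_CUTOFF_HU
prevalence = fatty.mean()     # fraction of patients below the cut-off
print(f"Fatty liver prevalence: {prevalence:.1%}")  # 40.0%
```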


1,823 abdominal NCCTs with no liver or related abnormality on clinical reports were extracted from the PACS. Of these, 107 (6%) failed the algorithm's quality check and were excluded, leaving 1,715 NCCTs for analysis. Age and sex were available for 1,626 patients: 775 males and 851 females, with a mean age of 44.4 years. The average LV was 1,389 mL (SD: 473 mL; range: 201–3,946 mL), and the average MLA was 59.2 HU (SD: 15.9 HU; range: 24.2–125.6 HU) (Fig 3). There was no strong correlation between LV and age for either men (R²: 0.002) or women (R²: 0.0001). Fatty liver, defined as MLA below 40 HU, was present in 122 of 1,715 patients (59% male, 41% female). Over 80% of patients with MLA below 40 HU were aged 35–75 years, with 27.2% aged 55–65 years (Fig 4).


Automated analysis using deep learning algorithms can parse massive datasets and shed light on important clinical questions, such as establishing age- and sex-correlated normative values. We establish new normative values for LV and MLA and quantify the prevalence of fatty liver at the study site.

The EPOS can be viewed here: