- Letter to the Editor
- Open Access
Concerns regarding the validity of nutrition self-efficacy questionnaire among Iranian elderly population
Journal of Health, Population and Nutrition volume 41, Article number: 3 (2022)
There are several statistical concerns regarding a recently published article that claims to develop and psychometrically evaluate an instrument to assess nutrition self-efficacy in the Iranian elderly population.
We read with interest the article titled “Nutrition self-efficacy assessment: designing and psychometric evaluation in a community-dwelling elderly population” by Shamsalinia et al. [1], published in the Journal of Health, Population and Nutrition in 2019. Using mixed methods, the authors developed and psychometrically evaluated an instrument to assess nutrition self-efficacy in an Iranian elderly population. However, there are some serious concerns about the reported results that we intend to share with the editor.
The authors stated that “an EFA using principal components analysis was undertaken to explore the underlying structure of the NSEQ” ([1], p. 5). However, exploratory factor analysis (EFA) and principal component analysis (PCA) are two different methods serving different purposes [2, 3]. Although some studies have incorrectly used EFA and PCA interchangeably, as Fokkema and Greiff [4] stated, “PCA should never be referred to as (exploratory) factor analysis” (p. 401). Indeed, PCA is more suitable for reducing observed variables into a smaller set of components than for extracting underlying factors (latent constructs).
In addition, to assess the construct reliability, convergent validity, and discriminant validity of the instrument, the authors claimed to have computed composite reliability (CR), average variance extracted (AVE), maximum shared variance (MSV), and average shared variance (ASV) using the results obtained from the PCA. Not only is conducting PCA, or even EFA, to compute CR, AVE, and MSV questionable, but the computed values are also inconsistent with the factor loadings reported in the paper [1]. In Table 1 we present the CR and AVE values reported in the paper alongside the values we computed from the reported factor loadings of the three constructs, following the formulas below (see [5,6,7]).
$$\mathrm{CR}=\frac{\left(\sum_{i=1}^{n} L_i\right)^{2}}{\left(\sum_{i=1}^{n} L_i\right)^{2}+\sum_{i=1}^{n}\left(1-L_i^{2}\right)}$$

where $i$ indexes the items, ranging from 1 to $n$; $n$ is the total number of items; and $L_i$ is the standardized factor loading of item $i$.
$$\mathrm{AVE}=\frac{\sum_{i=1}^{n} L_i^{2}}{\sum_{i=1}^{n} L_i^{2}+\sum_{i=1}^{n} e_i}$$

where $i$ indexes the items, ranging from 1 to $n$; $n$ is the total number of items; $L_i$ is the standardized factor loading of item $i$; and $e_i = 1 - L_i^{2}$ is the variance of item $i$ left unexplained by the construct.
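To illustrate, the CR and AVE computations can be sketched in a few lines of plain Python. The function names and the four standardized loadings below are our own illustrative choices, not values taken from the paper:

```python
def composite_reliability(loadings):
    """CR = (sum Li)^2 / ((sum Li)^2 + sum ei), with ei = 1 - Li^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = sum Li^2 / (sum Li^2 + sum ei); with standardized
    loadings this reduces to (sum Li^2) / n."""
    explained = sum(l ** 2 for l in loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return explained / (explained + error)

# Hypothetical standardized loadings for a four-item construct
loadings = [0.70, 0.60, 0.50, 0.65]
print(round(composite_reliability(loadings), 3))       # 0.708
print(round(average_variance_extracted(loadings), 3))  # 0.381
```

Note that a construct with these hypothetical loadings would pass the CR > 0.7 requirement while failing the AVE > 0.5 convergent-validity threshold, which is precisely the kind of discrepancy that recomputing the reported values can reveal.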
As Table 1 shows, none of the computed AVE values meets the convergent-validity threshold of AVE greater than 0.5. Moreover, contrary to the statement in the paper, the CR of information effectiveness falls below 0.7, violating the construct-reliability requirement. Therefore, in contrast to the authors’ claim, the study has failed to introduce a reliable instrument to measure nutrition self-efficacy among the Iranian elderly population.
Our concerns extend to the reported ASV and MSV values. Shared variance is the square of the correlation between any two constructs. Accordingly, the ASV of a construct is the mean of its squared correlations with the other constructs, and its MSV is the largest of those squared correlations [6, 8]. In the study by Shamsalinia et al. [1] there are three constructs in the measurement model and, accordingly, three covariances between them. Since three different MSV values are reported in the results, each MSV value must be one of the shared variances between a pair of constructs. This means that the reported ASV of each of the three constructs should be the mean of two of the MSV values. In other words, based on the reported MSV values, the ASV values should be 0.353, 0.355, and 0.374, which differ from the values reported in the paper (i.e., 0.329, 0.349, and 0.358). There are further concerns about the reported results. For example, only one value, rather than both the lower and upper bounds, is reported for each 95% confidence interval of Cronbach’s alpha. It is also unclear what the values 0.865 and 0.896 in the Spearman rank-order correlation coefficient table represent.
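The relationship between shared variance, ASV, and MSV described above can be sketched in Python. The squared inter-construct correlations below are hypothetical, chosen only to illustrate the consistency check; with three constructs, each construct’s ASV must equal the mean of two of the three shared variances:

```python
from statistics import mean

def asv_and_msv(squared_corrs):
    """Given squared correlations between pairs of constructs
    (keys are frozensets of two construct names), return
    {construct: (ASV, MSV)}."""
    constructs = {c for pair in squared_corrs for c in pair}
    result = {}
    for c in sorted(constructs):
        # Shared variances involving this construct
        shared = [v for pair, v in squared_corrs.items() if c in pair]
        result[c] = (mean(shared), max(shared))
    return result

# Hypothetical squared correlations among three constructs A, B, C
sq = {frozenset({"A", "B"}): 0.30,
      frozenset({"A", "C"}): 0.36,
      frozenset({"B", "C"}): 0.40}

for c, (asv, msv) in asv_and_msv(sq).items():
    print(c, round(asv, 3), msv)  # A 0.33 0.36 / B 0.35 0.4 / C 0.38 0.4
```

Applying this identity to the values in the paper is exactly how the expected ASVs of 0.353, 0.355, and 0.374 are obtained from the reported MSVs.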
The prevalence of statistical errors in medical journals, whether intentional or not, has long been a cause for concern. Construct reliability and validity lie at the heart of the competence and effectiveness of an instrument [9, 10]. Given the salient statistical errors in the assessment of the instrument’s reliability and validity, it is doubtful whether this construct is valid for use in future research. Moreover, its publication in an open-access journal amplifies the need to warn against the irreparable damage it may cause. Making the data collected for the published paper available to readers might help prevent such problems in the future.
Abbreviations
AVE: Average variance extracted
ASV: Average shared variance
EFA: Exploratory factor analysis
MSV: Maximum shared variance
PCA: Principal component analysis
References
1. Shamsalinia A, Ghadimi R, Chafjiri RT, Norouzinejad F, Pourhabib A, Ghaffari F. Nutrition self-efficacy assessment: designing and psychometric evaluation in a community-dwelling elderly population. J Health Popul Nutr. 2019;38(1):38.
2. Alavi M, Visentin DC, Thapa DK, Hunt GE, Watson R, Cleary M. Exploratory factor analysis and principal component analysis in clinical studies: which one should you use? J Adv Nurs. 2020;76(8):1886–9.
3. Park HS, Dailey R, Lemus D. The use of exploratory factor analysis and principal components analysis in communication research. Hum Commun Res. 2002;28(4):562–77.
4. Fokkema M, Greiff S. How performing PCA and CFA on the same data equals trouble: overfitting in the assessment of internal structure and some editorial thoughts on it. Eur J Psychol Assess. 2017;33(6):399–402.
5. Hair JF, Black WC, Babin BJ, Anderson RE. Multivariate data analysis: Pearson new international edition. Upper Saddle River: Pearson Education; 2013.
6. Fornell C, Larcker DF. Evaluating structural equation models with unobservable variables and measurement error. J Mark Res. 1981;18(1):39–50.
7. Pahlevan Sharif S, Sharif Nia H. Factor analysis and structural equation modeling with SPSS and AMOS. Tehran: Jame-e-Negar; 2021.
8. Farrell AM. Insufficient discriminant validity: a comment on Bove, Pervan, Beatty, and Shiu (2009). J Bus Res. 2010;63(3):324–7.
9. Clark LA, Watson D. Constructing validity: basic issues in objective scale development. Washington, DC: American Psychological Association; 2016. p. 187–203.
10. Thanasegaran G. Reliability and validity issues in research. Integr Dissem. 2009;4:1–7.
The authors declare that they have no competing interests.
The original online version of this article was revised: the authors reported an error in the affiliation of the author Sharif Nia.
Cite this article
Pahlevan Sharif, S., Naghavi, N. & Sharif Nia, H. Concerns regarding the validity of nutrition self-efficacy questionnaire among Iranian elderly population. J Health Popul Nutr 41, 3 (2022). https://doi.org/10.1186/s41043-022-00282-1
Keywords
- Exploratory factor analysis
- Construct validity