
Readability and quality assessment of internet-based patient education materials related to nasal septoplasty

Abstract

Background

Given that nasal septoplasty is a common procedure in otolaryngology – head and neck surgery, the objective of this study was to evaluate the quality and readability of online patient education materials on septoplasty.

Methods

A Google search was performed using eight different search terms related to septoplasty. Six different tools were used to assess the readability of included patient education materials. These included the Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning-Fog Index, Simple Measure of Gobbledygook Index, Coleman-Liau Index, and Automated Readability Index. The DISCERN tool was used to assess quality and reliability.

Results

Eighty-five online patient education materials were included. The average Flesch Reading Ease score for all patient education materials was 54.9 ± 11.5, indicating they were fairly difficult to read. The average reading grade level was 10.5 ± 2.0, which is higher than the recommended reading level for patient education materials. The mean DISCERN score was 42.9 ± 10.5, and 42% (36/85) of articles had DISCERN scores less than 39, corresponding to poor or very poor quality.

Conclusion

The majority of online patient education materials on septoplasty are written above the recommended reading levels and have significant deficiencies in terms of their quality and reliability. Clinicians and patients should be aware of the shortcomings of these resources and consider the impact they may have on patients’ decision making.

Background

Nasal septoplasty is considered the definitive treatment for patients with septal deviation and is one of the most common procedures performed by otolaryngologists – head and neck surgeons [1,2,3]. Septoplasty not only corrects septal deviation but also improves access and visualization during endoscopic sinus surgery [4]. Although septoplasty is a common procedure, patient satisfaction rates after surgery range from 65 to 80%, suggesting that not all patients are completely satisfied with the results [5]. Furthermore, the procedure carries a risk of complications including septal perforation, septal hematoma, and synechiae [6, 7]. Thus, patient education is critical to ensuring patients understand the benefits, risks, and expected outcomes of septoplasty.

Patients commonly use the internet to supplement information provided by their physician, with one in five patients using the internet to obtain medical information prior to their medical appointment [8, 9]. However, it is estimated that nearly half of Canadian and American adults have a limited ability to understand and act upon health information [10, 11]. Thus, even if online materials are evidence-based and accurate, they can be limited by the use of medical jargon and other technical language, which makes them difficult for patients with limited medical knowledge to comprehend. It has been estimated that the average American adult reads at an eighth grade level, and the American Medical Association (AMA) currently recommends that patient education materials (PEMs) be written for a grade six level audience [12]. However, studies have shown that online PEMs are often written at a much higher literacy level [13,14,15,16]. Within the realm of otolaryngology-head and neck surgery (OHNS), several studies have revealed that online information on OHNS procedures and conditions is written above the recommended grade level and is lacking in quality [17,18,19,20,21,22,23]. Since septoplasty is a common OHNS procedure, it is important that clinicians evaluate the information patients are accessing online about their surgery. Thus, the objective of this study was to assess the quality and readability of PEMs available online related to nasal septoplasty.

Methodology

Search

This study did not require Research Ethics Board approval, given the publicly available nature of the information. The search was conducted using Google (www.google.ca) in Ottawa, Ontario on May 1st, 2020. Google was chosen as it is the most commonly used search engine in North America [24]. Prior to initiating the search, all search history, cache, and cookies were cleared. Furthermore, the location settings were disabled and the search was performed using Google Chrome in incognito mode. This was done to minimize the influence of previous search history and location on the search results [25]. Eight search terms were used: “septoplasty”, “septoplasty patient information”, “deviated septum surgery”, “deviated septum surgery patient information”, “nasal septum surgery”, “nasal septum surgery patient information”, “nasal septum repair”, “nasal septum repair patient information”. The first 50 search results from each search term were reviewed.

Inclusion and exclusion criteria

All search results that were PEMs about septoplasty were included. Exclusion criteria were: websites not written in English, websites where the content was not accessible, audiovisual material, blogs, scientific webpages and articles (e.g., PubMed), webpages geared toward medical professionals, advertisements, websites containing less than 100 words of patient information, and websites without patient information pertaining to septoplasty.

Categorization of sources

The results were categorized into six categories based on their origin: 1) academic institutions, 2) private medical clinics, 3) professional organizations, 4) government websites, 5) medical information websites (e.g., WebMD), and 6) other miscellaneous sources (e.g., Wikipedia).

Outcome measures

Readability evaluation

Text from the included webpages was copied into Microsoft Word (Microsoft Corp, Redmond, WA, USA) and all formatting elements were removed. An online calculator (https://readable.com/) was used to evaluate readability. The following scores were used to assess readability: Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (FKG), Gunning-Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook Index (SMOG), and Automated Readability Index (ARI) [26,27,28]. Table 1 shows the formulas used to calculate the scores [29]. This comprehensive selection of readability formulas accounts for several parameters that impact readability, including words per sentence, syllables per word, letters per 100 words, and sentences per 100 words. Many previous studies have used these same readability indices when assessing PEMs [20, 21, 23, 26, 27, 30, 31].

Table 1 Instruments and calculations used to assess readability

The FKG, GFI, CLI, SMOG, and ARI measure the academic grade level necessary to comprehend the text, with a higher grade level corresponding to text that is more difficult to read. For example, an FKG score of 6 suggests that a sixth-grade reading level is needed to comprehend the text. The FRE scores range from 0 to 100, with a higher score corresponding to text that is easier to read (Table 2) [32].

Table 2 Flesch Reading Ease Score Interpretation
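
For readers who wish to reproduce the readability calculations, the following Python sketch implements the standard published formulas summarized in Table 1 from basic text statistics. The formulas themselves are well established; the simple vowel-group syllable counter, the word and sentence tokenization, and the sample text are illustrative assumptions only, and the online calculator used in this study may count these elements differently.

import re

def text_stats(text):
    # Count sentences, words, letters, syllables, and complex words (3+ syllables).
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    def count_syllables(word):
        # Naive syllable estimate: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    syllables = [count_syllables(w) for w in words]
    complex_words = sum(1 for s in syllables if s >= 3)
    return sentences, len(words), letters, sum(syllables), complex_words

def readability(text):
    sentences, words, letters, syllables, complex_words = text_stats(text)
    words_per_sentence = words / sentences
    syllables_per_word = syllables / words
    return {
        "FRE": 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word,
        "FKG": 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59,
        "GFI": 0.4 * (words_per_sentence + 100 * complex_words / words),
        "SMOG": 1.0430 * (complex_words * 30 / sentences) ** 0.5 + 3.1291,
        "CLI": 0.0588 * (letters / words * 100) - 0.296 * (sentences / words * 100) - 15.8,
        "ARI": 4.71 * (letters / words) + 0.5 * words_per_sentence - 21.43,
    }

sample = ("The septum is the wall of bone and cartilage that divides the nose into "
          "two sides. Septoplasty is surgery to straighten a crooked septum.")
for index, score in readability(sample).items():
    print(f"{index}: {score:.1f}")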

Quality of patient education materials

DISCERN is a tool used to evaluate the quality of PEMs [33]. The DISCERN tool consists of 16 questions, each of which assesses a specific criterion. All questions are graded on a scale of 1 to 5, with a score of 1 indicating that the criterion was not met, a score of 2 to 4 indicating that the criterion was partially met, and a score of 5 indicating that the criterion was fully met [33]. The DISCERN tool is further divided into two distinct sections. Reliability is assessed with questions 1 to 8, and the quality of the information on treatment options is assessed with questions 9 to 15. Question 16 is an overall rating of the publication. Total DISCERN scores were calculated as the sum of scores on the 16 questions, with a possible range from 16 to 80. Table 3 describes the interpretation of the total DISCERN scores [34]. Two raters (C.H., K.A.) independently evaluated the DISCERN scores for each included PEM. Discrepancies were resolved by a third reviewer (E.G.). The averages of the final reconciled DISCERN scores are reported.

Table 3 DISCERN Scores
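
Because the DISCERN item groupings described above determine the reported subscores, a minimal sketch of the aggregation is shown below. The item ratings themselves require human judgement and are not automated here; the example ratings and variable names are hypothetical.

def discern_summary(ratings):
    # Summarize one rater's DISCERN ratings: 16 items, each scored 1-5.
    assert len(ratings) == 16 and all(1 <= r <= 5 for r in ratings)
    return {
        "reliability": sum(ratings[0:8]),   # questions 1-8, maximum 40
        "quality": sum(ratings[8:15]),      # questions 9-15, maximum 35
        "overall": ratings[15],             # question 16, overall rating
        "total": sum(ratings),              # questions 1-16, range 16-80
    }

# Hypothetical ratings for a single PEM
example_ratings = [2, 4, 3, 1, 2, 3, 4, 3, 4, 4, 3, 3, 4, 3, 3, 3]
print(discern_summary(example_ratings))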

Statistical analyses

Frequencies and proportions were used to report categorical variables, whereas means and standard deviations were used for continuous variables. Separate analyses were conducted to determine if quality and readability differed depending on the origin of the PEMs. These were compared using the Kruskal-Wallis test, followed by Dunn-Bonferroni post hoc tests. The weighted kappa (κ) statistic was used to determine interrater reliability for the DISCERN scoring. Statistical analyses were performed using SPSS (v26.0, IBM Corp, Armonk, NY, USA), with statistical significance set to p < 0.05.
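
As an illustration only, comparable comparisons could be reproduced outside SPSS with open-source tools; the sketch below uses SciPy for the Kruskal-Wallis test and scikit-learn for a weighted kappa. The grouped scores, the quadratic weighting, and the omission of the Dunn-Bonferroni post hoc step (available, for example, in the scikit-posthocs package) are assumptions for brevity and do not reflect the study's actual analysis code.

from scipy.stats import kruskal
from sklearn.metrics import cohen_kappa_score

# Hypothetical total DISCERN scores grouped by PEM origin
academic = [55, 48, 60, 52, 47]
private_clinics = [35, 40, 38, 42, 33]
government = [50, 46, 49]

# Kruskal-Wallis test for a difference across origin categories
h_statistic, p_value = kruskal(academic, private_clinics, government)
print(f"Kruskal-Wallis H = {h_statistic:.2f}, p = {p_value:.3f}")

# Weighted kappa for interrater agreement on DISCERN item scores
rater_1 = [3, 4, 2, 5, 3, 4, 1, 2]
rater_2 = [3, 4, 3, 5, 3, 4, 1, 2]
kappa = cohen_kappa_score(rater_1, rater_2, weights="quadratic")
print(f"Weighted kappa = {kappa:.2f}")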

Results

Search results

Four hundred web pages were retrieved from the search. After the removal of 249 duplicates and the exclusion of 66 web pages, 85 PEMs met the inclusion criteria. Sixty-six percent (56/85) of the PEMs originated from the United States, 9.4% (8/85) were from Canada, 11% (9/85) were from the UK, and 14% (12/85) were from other countries. Of the included PEMs, 42% (36/85) originated from academic institutions, 32% (27/85) from private medical clinics, 2.3% (2/85) from professional organizations, 14% (12/85) from medical information websites, 3.5% (3/85) from government websites, and 5.9% (5/85) from miscellaneous sources. Forty-five percent (38/85) of web pages appeared in the search results for multiple search terms.

Readability

The mean FRE score for all included PEMs was 54.9 ± 11.5, with a range of 35.1 to 78.3. Sixty-eight percent (58/85) had FRE scores below 60. The mean reading grade levels as determined by the FKG, GFI, CLI, SMOG, and ARI scores are displayed in Fig. 1. The average reading grade level across all five scores was 10.5 ± 2.0. Table 4 shows the mean readability scores across all six readability indices, stratified by the origin of the PEM. PEMs from miscellaneous sources were the most difficult to read across all six readability indices, whereas PEMs from government websites had the lowest average reading grade levels (Table 4). PEMs originating from academic institutions had significantly higher FRE scores (p = 0.002) and lower reading grade levels than PEMs originating from private clinics according to the FKG (p = 0.002), GFI (p = 0.003), CLI (p = 0.002), SMOG (p = 0.009), and ARI (p = 0.005). PEMs from academic institutions also had significantly lower reading grade levels than those originating from miscellaneous sources according to the SMOG (p = 0.04) and GFI (p = 0.03).

Fig. 1 Average Reading Grade Levels. The solid black line represents the eighth grade reading level and the dashed black line represents the sixth grade reading level. FKG: Flesch-Kincaid Grade Level, GFI: Gunning-Fog Index, CLI: Coleman-Liau Index, SMOG: Simple Measure of Gobbledygook Index, ARI: Automated Readability Index

Table 4 Mean readability scores with standard deviations according to origin

DISCERN

The mean total DISCERN score was 42.9 ± 10.5. The weighted κ statistic for total DISCERN scores was 0.95. Each question included in the DISCERN instrument is scored from 1 to 5 and the average score for each question is displayed in Table 5. Forty-two percent (36/85) of articles had total DISCERN scores less than 39, indicating they were of “poor” or “very poor” quality. Figure 2 demonstrates the DISCERN scores for the PEMs based on their origin. PEMs originating from academic institutions had significantly higher reliability scores than those originating from private clinics (p = 0.017). Additionally, PEMs originating from medical information websites had significantly higher reliability scores than those from private clinics (p = 0.002).

Table 5 Average score for each item in the DISCERN instrument
Fig. 2 DISCERN rating of PEMs categorized by origin. The overall average DISCERN score, reliability score, and quality scores are shown. AI: Academic Institutions, PO: Professional Organizations, ME: Medical Information Websites, MI: Miscellaneous, GW: Government Websites, PC: Private Clinics

Discussion

Providing information leaflets prior to septoplasty has been shown to improve patient understanding of the procedure compared with verbal instructions alone [35]. Patients who are knowledgeable about their treatment plans take a more active role in decision making, which has been shown to improve outcomes [36, 37]. Thus, we hypothesize that written information regarding septoplasty is critical in facilitating the shared decision-making process and increasing patient satisfaction. However, much of the written information patients access about their treatments originates from the internet. Although this information is widely accessible, it remains largely unregulated, resulting in health information of variable quality and credibility, as demonstrated by several other authors [20, 21, 38, 39]. Furthermore, even websites that provide high quality, evidence-based information need to be written at a level appropriate for patients without extensive medical knowledge. In addition to quality, the readability of PEMs is an important consideration to ensure patients can understand and apply information related to their conditions and treatments.

PEMs on septoplasty were above the recommended grade level across all six readability indices used. Given that each readability index accounts for different criteria, the use of multiple indices strengthens this finding. On average, PEMs on septoplasty were written at approximately a tenth grade reading level, which exceeds both the sixth grade reading level recommended by the American Medical Association and the eighth grade reading level of the average American adult [12]. Furthermore, 68% had FRE scores below 60, indicating that they were “fairly difficult” to “very difficult” to read. Similarly, Cherla et al. conducted a study on the readability of online PEMs on endoscopic sinus surgery and found that over 95% of the material assessed was written above the sixth grade reading level [39]. Even PEMs from the American Rhinologic Society were written between a ninth grade and graduate reading level [26]. This is an important finding, as some may assume that material originating from credible sources such as academic institutions and professional organizations is better suited for patient education. This study found that PEMs originating from academic institutions were significantly easier to read than those originating from private clinics and miscellaneous sources. It has been suggested that these differences in readability may arise because academic institutions benefit from affiliations with libraries and other multidisciplinary professionals, which in turn improves the delivery of health information [39]. Although academic institutions may produce more patient-friendly materials, our study still found they were above the sixth grade reading level recommended for PEMs. Suggestions for improving the readability of PEMs include minimizing the use of complex words, decreasing the number of words per sentence and syllables per word, using numbering or bullet points, and writing in an active voice [20, 26, 27]. The authors have created an example patient brochure, written at approximately a sixth grade reading level, which adheres to the readability standards discussed in this paper (Supplementary Material 1).

In addition to readability, PEMs must also contain reliable, comprehensive, and evidence-based information in order to be useful for patient education. To assess these aspects, this study evaluated all PEMs with the DISCERN tool. Forty-two percent (36/85) of PEMs had total DISCERN scores corresponding to “poor” or “very poor” quality. Seymour et al. demonstrated similar results when evaluating the quality of web-based patient information on cochlear implantation, finding that 63% of websites scored as “poor” or “very poor” quality based on total DISCERN scores [19]. In addition to overall scores, examining the subdomains and individual questions within the DISCERN tool can highlight more specific deficiencies in the PEMs. Interestingly, the mean total reliability score (based on questions 1–8) was 18.4 out of a maximum score of 40, whereas the mean total quality score (based on questions 9–15) was 24.4 out of a maximum score of 35. This difference is likely attributable to questions 1, 4, and 5, with mean scores of 2.0, 1.5, and 1.8, respectively. Question 1 assesses whether the PEM has clear aims. This study found that the majority of PEMs on septoplasty did not define who and what they were intended for. A good quality publication has clear aims that help readers judge whether a resource is likely to contain the information they are looking for and for whom it would be most useful. Furthermore, the majority of PEMs included in this study did not report the evidence used to compile the information (question 4) nor did they include an indication of how current the information was (question 5). Similar findings were demonstrated by Bojrab et al., who evaluated online information on Ménière’s disease and found mean scores of 1.85 and 2.18 for questions 4 and 5, respectively [38]. Ensuring that authors of PEMs provide clear bibliographies and include revision dates would improve the reliability of these resources. Interestingly, this study also found that academic institutions and medical information websites had significantly higher reliability scores when compared to private clinics. These differences may reflect the fact that academic institutions and medical information websites have access to a number of experts in their respective fields and may have more resources to produce robust PEMs. These findings may have important implications when physicians refer patients to online resources to learn more about septoplasty [40]. The brochure created by the authors (Supplementary Material 1) provides an example of a PEM designed to adhere to the quality standards outlined by the DISCERN instrument and to score 4 or higher on each component of the instrument.

This study has several limitations. First, the search strategy used the Google search engine with eight different search terms to simulate how patients search the internet for health information. It is possible that patients could obtain different resources using other search engines (e.g., Yahoo); however, Google is the most commonly used search engine and has been the sole search engine used in a multitude of other readability analyses [24, 27, 38, 41]. Furthermore, it is not possible to predict which search terms patients will use; however, this study utilized eight different search terms, which were thought to cover the terms most likely to be used by patients. This study focused on information available online and written in English. Many other potentially useful sources of patient information, such as videos, information written in other languages, or patient information booklets provided to patients in the clinic, were not evaluated. The correlation between readability scores and true reader comprehension cannot be considered perfect, as readability scores have several limitations. Since these scores are based on variables such as the number of syllables or characters per word, they can be skewed by medical terminology like “turbinectomy” or “reconstruction”. They also do not account for shorter words that are difficult to understand, such as “septum”. Additionally, a gold standard test for the readability assessment of PEMs does not exist; however, this study employed multiple readability tests used in previous literature to provide a comprehensive assessment of readability and minimize the overall impact of factors that can skew the scores. Lastly, although the DISCERN tool has been validated and widely applied to patient information on treatment options, it does not directly evaluate the accuracy of the information contained within these PEMs. This is certainly an area that deserves further study, as it has been demonstrated that online information on septoplasty contained, on average, only 42% of the information patients should know prior to undergoing surgery [42].

Conclusion

This study assessed online PEMs on septoplasty and demonstrated that the majority are written above the recommended reading level. Furthermore, this study revealed deficiencies in both the quality and reliability of internet-based PEMs on septoplasty. The shortcomings of online PEMs should be emphasized to both patients and providers to ensure adequate and appropriate patient education.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AMA: American Medical Association
PEMs: Patient education materials
OHNS: Otolaryngology-head and neck surgery
FRE: Flesch Reading Ease
FKG: Flesch-Kincaid Grade Level
GFI: Gunning-Fog Index
CLI: Coleman-Liau Index
SMOG: Simple Measure of Gobbledygook Index
ARI: Automated Readability Index

References

  1. Stewart MG, Smith TL, Weaver EM, et al. Outcomes after nasal septoplasty: results from the Nasal Obstruction Septoplasty Effectiveness (NOSE) study. Otolaryngol Head Neck Surg. 2004;130:283–90.
  2. van Egmond MMHT, Rovers MM, Hannink G, Hendriks CTM, van Heerbeek N. Septoplasty with or without concurrent turbinate surgery versus non-surgical management for nasal obstruction in adults with a deviated septum: a pragmatic, randomised controlled trial. Lancet. 2019;394:314–21.
  3. Manoukian PD, Robert Wyatt J, Leopold DA, Bass EB. Recent trends in utilization of procedures in otolaryngology-head and neck surgery. Laryngoscope. 1997;107:472–7.
  4. Wormald P-J. Endoscopic sinus surgery: anatomy, three-dimensional reconstruction, and surgical technique. New York: Thieme Publishers; 2011.
  5. Gillman GS, Egloff AM, Rivera-Serrano CM. Revision septoplasty: a prospective disease-specific outcome study. Laryngoscope. 2014;124:1290–5.
  6. Quinn JG, Bonaparte JP, Kilty SJ. Postoperative management in the prevention of complications after septoplasty: a systematic review. Laryngoscope. 2013;123(6):1328–33.
  7. Bloom JD, Kaplan SE, Bleier BS, Goldstein SA. Septoplasty complications: avoidance and management. Otolaryngol Clin N Am. 2009;42(3):463–81.
  8. Tassone P, Georgalas C, Patel NN, Appleby E, Kotecha B. Do otolaryngology out-patients use the internet prior to attending their appointment? J Laryngol Otol. 2004;118:34–8.
  9. Statistics Canada. The Daily, Monday, May 10, 2010: Canadian Internet Use Survey. https://www150.statcan.gc.ca/n1/daily-quotidien/100510/dq100510a-eng.htm. Accessed 8 April 2020.
  10. Hoffman-Goetz L, Donelle L, Ahmed R. Health literacy in Canada: a primer for students. Canadian Scholars’ Press; 2014.
  11. Institute of Medicine (US) Committee on Health Literacy. Health literacy: a prescription to end confusion. Washington (DC): National Academies Press (US); 2014.
  12. Weiss BD, Schwartzberg JG, Davis TC, Parker RM, Sokol PE, Williams MV. Health literacy and patient safety: help patients understand: manual for clinicians. Chicago: American Medical Association; 2007.
  13. Ferster APO, O’Connell Ferster AP, Hu A. Evaluating the quality and readability of internet information sources regarding the treatment of swallowing disorders. Ear Nose Throat J. 2017;96:128–38.
  14. Ting K, Hu A. Evaluating the quality and readability of thyroplasty information on the internet. J Voice. 2014;28:378–81.
  15. Kong KA, Hu A. Readability assessment of online tracheostomy care resources. Otolaryngol Head Neck Surg. 2015;152:272–8.
  16. Richard A, Richard J, Johnston W, Miyasaki J. Readability of advance directive documentation in Canada: a cross-sectional study. CMAJ Open. 2018;6:E406–11.
  17. Chi E, Jabbour N, Aaronson NL. Quality and readability of websites for patient information on tonsillectomy and sleep apnea. Int J Pediatr Otorhinolaryngol. 2017;98:1–3.
  18. Yi GS, Hu A. Quality and readability of online information on in-office vocal fold injections. Ann Otol Rhinol Laryngol. 2020;129:294–300.
  19. Seymour N, Lakhani R, Hartley B, Cochrane L, Jephson C. Cochlear implantation: an assessment of quality and readability of web-based information aimed at patients. Cochlear Implants Int. 2015;16:321–5.
  20. Wong K, Gilad A, Cohen MB, Kirke DN, Jalisi SM. Patient education materials assessment tool for laryngectomy health information. Head Neck. 2017;39:2256–63.
  21. Balakrishnan V, Chandy Z, Hseih A, Bui T-L, Verma SP. Readability and understandability of online vocal cord paralysis materials. Otolaryngol Head Neck Surg. 2016;154:460–4.
  22. Best J, Muzaffar J, Mitchell-Innes A. Quality of information available via the internet for patients with head and neck cancer: are we improving? Eur Arch Otorhinolaryngol. 2015;272:3499–505.
  23. Santos PJF, Daar DA, Paydar KZ, Wirth GA. Readability of online materials for rhinoplasty. World J Plast Surg. 2018;7:89–96.
  24. Alexa. Top sites by category: Computers/Internet/Searching. http://www.alexa.com/topsites/category/Computers/Internet/Searching. Accessed 2 May 2020.
  25. Behmer Hansen R, Gold J, Lad M, Gupta R, Ganapa S, Mammis A. Health literacy among neurosurgery and other surgical subspecialties: readability of online patient materials found with Google. Clin Neurol Neurosurg. 2020;197:106141.
  26. Kasabwala K, Misra P, Hansberry DR, et al. Readability assessment of the American Rhinologic Society patient education materials. Int Forum Allergy Rhinol. 2013;3:325–33.
  27. Misra P, Agarwal N, Kasabwala K, Hansberry DR, Setzen M, Eloy JA. Readability analysis of healthcare-oriented education resources from the American Academy of Facial Plastic and Reconstructive Surgery. Laryngoscope. 2013;123:90–6.
  28. Patel CR, Sanghvi S, Cherla DV, Baredes S, Eloy JA. Readability assessment of internet-based patient education materials related to parathyroid surgery. Ann Otol Rhinol Laryngol. 2015;124:523–7.
  29. Readable. Readability formulas. https://readable.com/features/readability-formulas/. Accessed 2 May 2020.
  30. Kasabwala K, Agarwal N, Hansberry DR, Baredes S, Eloy JA. Readability assessment of patient education materials from the American Academy of Otolaryngology–Head and Neck Surgery Foundation. Otolaryngol Head Neck Surg. 2012;147:466–71.
  31. Sax L, Razak A, Shetty K, Cohen M, Levi J. Readability of online patient education materials for parents after a failed newborn hearing screen. Int J Pediatr Otorhinolaryngol. 2019;125:168–74.
  32. Flesch R. A new readability yardstick. J Appl Psychol. 1948;32:221–33.
  33. Charnock D. The DISCERN handbook: quality criteria for consumer health information on treatment choices. Abingdon: Radcliffe Medical Press; 1998.
  34. San Giorgi MRM, de Groot OSD, Dikkers FG. Quality and readability assessment of websites related to recurrent respiratory papillomatosis. Laryngoscope. 2017;127:2293–7.
  35. Winterton RIS, Alaani A, Loke D, Bem C. Role of information leaflets in improving the practice of informed consent for patients undergoing septoplasty. J Laryngol Otol. 2007;121:134–7.
  36. Moulton LS, Evans PA, Starks I, Smith T. Pre-operative education prior to elective hip arthroplasty surgery improves postoperative outcome. Int Orthop. 2015;39:1483–6.
  37. Fu MR, Chen CM, Haber J, Guth AA, Axelrod D. The effect of providing information about lymphedema on the cognitive and symptom outcomes of breast cancer survivors. Ann Surg Oncol. 2010;17:1847–53.
  38. Bojrab DI, Fritz C, Babu S, Lin KF. A critical analysis of the information available online for Ménière’s disease. Otolaryngol Head Neck Surg. 2020;162:329–36.
  39. Cherla DV, Sanghvi S, Choudhry OJ, Liu JK, Eloy JA. Readability assessment of internet-based patient education materials related to endoscopic sinus surgery. Laryngoscope. 2012;122:1649–54.
  40. Biggs TC, Jayakody N, Best K, King EV. Quality of online otolaryngology health information. J Laryngol Otol. 2018;132:560–3.
  41. Harris VC, Links AR, Hong P, et al. Consulting Dr. Google: quality of online resources about tympanostomy tube placement. Laryngoscope. 2018;128:496–501.
  42. Kulasegarah J, McGregor K, Mahadevan M. Quality of information on the internet—has a decade made a difference? Ir J Med Sci. 2018;187:873–6.


Acknowledgements

Not applicable.

Funding

None.

Author information


Contributions

EG and CH were involved in the conception and design, acquired, analyzed, and interpreted the data, and were major contributors in writing the manuscript. KA was involved in the acquisition and interpretation of data, as well as contributing to manuscript writing. VW was involved in analyzing and interpreting the data, as well as contributing to manuscript writing. JML was involved in the conception and design, acquired and interpreted the data, and was a major contributor in reviewing the manuscript. All authors read and approved the final manuscript. All authors agree to be accountable for all aspects of the work.

Corresponding author

Correspondence to Elysia M. Grose.

Ethics declarations

Ethics approval

Ethics approval was waived as this project did not report on or involve the use of any human participants, human data, or human tissue.

Consent for publication

Not applicable.

Competing interests

JML has received research grants and honoraria from Baxter Corporation. All other authors declare no potential conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Grose, E.M., Holmes, C.P., Aravinthan, K.A. et al. Readability and quality assessment of internet-based patient education materials related to nasal septoplasty. J of Otolaryngol - Head & Neck Surg 50, 16 (2021). https://doi.org/10.1186/s40463-021-00507-z
