
Development and validation of an administrative data algorithm to identify adults who have endoscopic sinus surgery for chronic rhinosinusitis



Abstract

Background

This was a diagnostic accuracy study to develop an algorithm, based on administrative database codes, that identifies patients with Chronic Rhinosinusitis (CRS) who have endoscopic sinus surgery (ESS).


Methods

From January 1st, 2011 to December 31st, 2012, a chart review was performed for all hospital-identified ESS surgical encounters. The reference standard was developed as follows: cases were assigned to encounters in which ESS was performed for Otolaryngologist-diagnosed CRS; all other chart review encounters, and all other hospital surgical encounters during the timeframe, were controls. Algorithm development was based on International Classification of Diseases, version 10 (ICD-10) diagnostic codes and Canadian Classification of Health Interventions (CCI) procedural codes. Internal model validation was performed with a similar chart review for all model-identified cases and 200 randomly selected controls during the following year.


Results

During the study period, 347 cases and 185,007 controls were identified. The predictive model assigned cases to all encounters that contained at least one CRS ICD-10 diagnostic code and at least one ESS CCI procedural code. Compared to the reference standard, the algorithm was very accurate: sensitivity 96.0% (95%CI 93.2–97.7), specificity 100% (95%CI 99.9–100), and positive predictive value 95.4% (95%CI 92.5–97.3). Internal validation using chart review for the following year revealed similar accuracy: sensitivity 98.9% (95%CI 95.8–99.8), specificity 97.1% (95%CI 93.4–98.8), and positive predictive value 96.9% (95%CI 93.0–99.8).


Conclusions

A simple model based on administrative database codes accurately identified ESS-CRS encounters. This model can be used in population-based cohorts to study longitudinal outcomes for the ESS-CRS population.


Background

Chronic Rhinosinusitis (CRS) is a common and debilitating inflammatory disease of the sinonasal cavities. CRS is associated with significant resource utilization and burden on health care expenditures [1]. The prevalence of CRS has been estimated at between 5 and 15% of the population [1, 2], and appears to be rising [3]. Patients with CRS self-report their overall health status at a level similar to those with other chronic diseases including current or previous cancer, asthma, migraine, arthritis and epilepsy [4].

Much of the epidemiological data that forms our understanding of CRS is based on studies that identify CRS within large administrative databases and health surveys. We recently published a systematic review of studies that determined the accuracy of these methods to identify CRS [5]. We found three studies that compared CRS identification (ascertained from diagnostic codes and self-reporting) to a reference standard (clinician-performed chart review, nasal endoscopy, or Otolaryngologist-based CRS clinical diagnosis); these showed moderate to good accuracy.

Health administrative (HA) data may provide the best research modality to develop reliable population-based statistics for CRS patients. HA data have great potential to answer important research questions because of their low cost (since the data are already collected), wide external validity (since the data can cover all people within a particular health care system), and large numbers of patients to provide statistical power [6].

Administrative databases (ADs) are not built for research purposes, and HA research “creates risks that can make them uninterpretable or bias their results” [7]. Within HA data, diseases and procedures are represented by codes. The validity of using HA data to answer research questions depends on the accuracy of these codes for the entities they are supposed to represent. Coding errors that occur in defining the initial cohort, the exposure, or the outcome of an administrative data project can result in biased conclusions. Despite the importance of establishing the accuracy of administrative database codes, such validation is performed in less than 20% of administrative database studies [8]. One of the core (and arguably most important) requirements for using ADs for research is validation of the codes that serve as proxies for a defined population [9, 10].

Our objective was to identify a model that would accurately identify CRS patients within HA data. The single physician diagnostic CRS code “473.x” (version 9 of the International Classification of Diseases (ICD-9)) is one such model for identifying CRS cases. However, the aforementioned systematic review identified one study in which this code had a positive predictive value (PPV) of just 34%. A more feasible way to meet our objective of capturing a CRS cohort within HA data was to examine CRS patients who had ESS. CRS patients who fail medical therapy are potential surgical candidates, and this subgroup therefore represents patients with medically refractory CRS [11].

We first created a chart review-based reference standard cohort of patients who had ESS for CRS, and then derived a model based on health administrative data to identify this population within a surgical cohort. The final objective was a model that, when applied to all surgical encounters, accurately identified ESS-CRS encounters.


Methods

This was a validation study of diagnostic test accuracy using several measures of accuracy including sensitivity, specificity and predictive values. To achieve current standards in performing studies of diagnostic accuracy, we adhered to the Standards for Reporting of Diagnostic Accuracy Studies (STARD, 2015 version [12], Appendix). This study received institutional research ethics board approval (OHSN-REB 20140164).

Databases used

The Ottawa Hospital Data Warehouse (OHDW) contains data from several source systems of patient data dating back as far as 1996 for patients treated at The Ottawa Hospital (TOH), a 1000-bed tertiary care hospital serving over 1.2 million patients and affiliated with the University of Ottawa. Several groups of variables are recorded for each patient encounter, including unique identifiers, patient demographics, encounter type, diagnoses, and services rendered (including surgeries). We used the surgery dataset, an online computerized charting and scheduling system covering all operations performed at TOH since April 2008. Several checklists ensure that the correct surgery and indication are recorded: the surgeon completes and submits the paperwork for the surgery, documenting the actual procedure(s) performed during the operation, and confirms all of this at the end of the case.

While the OHDW contains data for TOH patients, the Institute for Clinical Evaluative Sciences (ICES) maintains administrative data for over 13 million people covered by the publicly funded health plan. Patients treated at TOH can be identified and linked through both databases with unique identifiers.

Identifying patients undergoing ESS for CRS at TOH

We obtained a cohort of all TOH surgical encounters that were recorded as Otolaryngologist-performed ESS procedures between January 1st, 2011 and December 31st, 2012 for patients ≥18 years old. The encounters selected for the chart review were identified as follows: because ESS is only performed by Otolaryngologists, we first identified all surgical encounters performed by this type of surgeon. We then selected all encounters that listed ESS as at least a minor component of the surgery performed during that encounter.

The extracted cohort therefore included all ESS surgeries performed by TOH Otolaryngologists, meaning that all other surgeries conducted at TOH during this time period (all by non-Otolaryngologists) were not ESS.

Chart review: determine whether ESS was conducted for CRS

A chart review was performed of all Otolaryngologist-performed ESS cases to identify those in which ESS was the predominant surgery performed (as opposed to other procedures such as open sinus approaches), and in which ESS was performed for Otolaryngologist-diagnosed CRS (as opposed to other indications such as benign tumours, cerebrospinal fluid leaks, encephaloceles, trauma, foreign bodies, and invasive fungal sinusitis) [13,14,15]. The chart review was performed by a single author (KM), and involved analysis of primary care physician referrals, clinic notes, operative notes, and sinus CT imaging. We used Otolaryngologist-diagnosed CRS as opposed to a retrospective chart review to identify symptoms and objective findings meeting CRS diagnostic criteria [11], because the latter approach would more likely result in incomplete data collection and misclassification. If the listed diagnosis was recurrent sinusitis, a more detailed chart review was performed to determine if the patient had coexisting CRS. This included clinic notes, preoperative imaging, and prior OR reports. If the patient had associated CRS, the encounter was labeled as a case, otherwise a control.

Patient encounters in which the chart review confirmed ESS for CRS were categorized as cases. All other encounters were categorized as controls.

Linkage to population-based datasets at ICES

This dataset was linked to ICES via unique identifiers that were encrypted to maintain patient confidentiality. This linked dataset with assigned ESS-CRS cases and controls then provided the reference standard from which the predictive model was created.

Derivation and internal validation of model to identify ESS for CRS encounters

The same clinician (KM) who performed the chart review created the model. Model development was based on an a priori identification of codes that could differentiate cases and controls. Table 1 lists the ICD-10 (International classification of diseases, version 10 [16]) diagnostic codes for CRS and CCI (Canadian Classification of Health Interventions, version 2015 [17]) codes for ESS that were identified from this process.

Table 1 Administrative database codes used in predictive model

Model variations were developed in a trial-and-error approach. We considered several variable types for model inclusion, including hospital length of stay (as most ESS is day surgery), age and major comorbidities (because ESS for CRS is usually an elective surgery that may be performed in younger and healthier people compared to other major surgeries), and the CCI and ICD-10 codes listed in Table 1. Our aim was to develop a simple model that used as few codes and variable types as possible, but that made clinical sense. We theorized that each ESS-CRS surgical encounter should contain some variation of ICD-10 CRS and CCI ESS codes, and so we resolved to use at least these two variable types in our model. The model was built and adjusted by comparing the accuracy of model case ascertainment to the reference standard.

The final model accuracy was displayed in a 2x2 table comparing the case status of the model output to the reference standard. Validation statistics with 95% confidence intervals (95% CI) were calculated, using SAS version 9.3 for UNIX (SAS Institute, Inc., USA).
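The published confidence intervals were produced in SAS; as an illustrative sketch only (not the authors' code, and SAS may use exact Clopper-Pearson intervals that differ slightly), a Wilson score interval for a proportion such as sensitivity can be computed as follows. The counts used here (333 true positives among 347 reference cases) are back-calculated from the reported 96.0% sensitivity, not taken directly from the paper's tables:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    rad = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - rad) / denom, (centre + rad) / denom

# Sensitivity = TP / (TP + FN); counts back-calculated from the reported 96.0%.
sens = 333 / 347
lo, hi = wilson_ci(333, 347)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The Wilson interval is asymmetric and behaves well for proportions near 0 or 1, which matters here given the near-perfect specificity.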

Internal validation was then performed to determine model accuracy within another TOH cohort from a different time period. Using model criteria, all TOH patient encounters identified by the model as cases, and 200 randomly selected controls, between Jan. 1st, 2013 and Dec. 31st, 2013 were retrieved. A chart review of the approximately 400 encounters was performed to determine reference standard case status by the same clinician (KM), blinded to the model-predicted case status. Once the chart review was completed, model case status was revealed, and another 2x2 table and set of validation statistics were created to determine internal validation of the model.


Results

Chart review

From Jan. 1st 2011, to Dec. 31st, 2012, 411 TOH surgical encounters were identified as having ESS (Fig. 1). Of these, 17 were excluded after the chart review revealed that the major surgery was one other than ESS, leaving 394 encounters that included at least endoscopic antrostomy and ethmoidectomy. Another 37 encounters were excluded because the procedures were for diagnoses other than chronic sinusitis, including 18 sinonasal tumours and 8 with recurrent sinusitis with no evidence of associated CRS. The OR report (with the surgery and indication) for the specified surgical encounter was sufficient to establish case status in all but the 8 patients with recurrent sinusitis. For these 8 patients, no satisfactory evidence of associated CRS could be determined from the chart review. This resulted in 357 ESS-CRS cases during the study period.

Fig. 1
figure 1

Flow chart of chronic rhinosinusitis - endoscopic sinus surgery chart review. Chart review was performed for TOH surgical encounters in which ESS was performed during the defined time period. ESS = endoscopic sinus surgery; CRS = chronic rhinosinusitis; TOH = The Ottawa Hospital

Linkage of chart review data to ICES dataset

Patient encounters within the TOH chart review cohort and ICES dataset were linked via encrypted unique identifiers. Thirteen patients (ten cases and three controls) were lost in the linkage due to missing unique identifiers. The linked dataset contained 185,354 hospital encounters representing all surgeries performed at TOH from Jan. 1st, 2011, to Dec. 31st, 2012. This linked dataset, with 347 cases and 185,007 controls (case prevalence = 0.19%), was used to develop the predictive model.

Model development

The model was created through a trial-and-error approach, using variables within the linked dataset. It was evident from analyzing the variable types and values that each case encounter contained commonly assigned CRS diagnostic and ESS procedural codes. The first model assigned cases if an encounter listed any of the ICD-10 CRS diagnostic codes listed in Table 1. Compared to the reference standard case ascertainment, this model had excellent validation statistics: sensitivity 96.5% (95%CI 93.9–98.1) and positive predictive value (PPV) 93.3% (95%CI 90.1–95.6).

The second model was based on procedural ESS codes only. Cases were assigned if an encounter listed any one of the CCI ESS procedural codes listed in Table 1. Compared to the reference standard case ascertainment, this model had similarly high validation statistics: sensitivity 96.8% (95%CI 94.2–98.3) and PPV 93.3% (95%CI 90.1–95.6).

The third and final model combined features from the first two models, resulting in a slightly improved PPV. Encounters were classified by the final model as ESS for CRS if they had been coded with any of the ICD-10 CRS diagnostic codes listed in Table 1 along with any of the CCI ESS surgical codes listed in Table 1. All encounters not meeting these criteria were classified as controls (i.e. not ESS for CRS). Table 2 compares validation statistics of the three model variations. Specificity for all three models was 100%.
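The final model's classification rule can be expressed as a simple predicate over an encounter's coded diagnoses and procedures. The sketch below uses hypothetical placeholder code sets; the actual ICD-10 CRS and CCI ESS codes are those listed in Table 1:

```python
# Placeholder code sets -- substitute the actual ICD-10 CRS diagnostic codes
# and CCI ESS procedural codes from Table 1 of the paper.
CRS_ICD10_CODES = {"J32.0", "J32.4", "J32.8", "J32.9", "J33.0"}  # hypothetical
ESS_CCI_CODES = {"1.EY.87", "1.EZ.87"}                           # hypothetical

def is_ess_crs_case(diagnoses, procedures):
    """Final model: case if the encounter has at least one CRS diagnostic
    code AND at least one ESS procedural code; otherwise control."""
    has_crs_dx = any(code in CRS_ICD10_CODES for code in diagnoses)
    has_ess_px = any(code in ESS_CCI_CODES for code in procedures)
    return has_crs_dx and has_ess_px

# An encounter needs both code types to be classified as a case:
print(is_ess_crs_case(["J32.9"], ["1.EY.87"]))  # True  -> case
print(is_ess_crs_case(["J32.9"], []))           # False -> control
```

This conjunctive rule is what distinguishes the final model from the first two single-code-type models.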

Table 2 Comparison of validation statistics of three models to predict CRS-ESS encounters

Table 3 displays a 2x2 table comparing the final model output to the reference standard, with validation statistics including sensitivity 96.0% (95%CI 93.2–97.7), specificity 100% (95%CI 99.9–100), positive predictive value 95.4% (95%CI 92.5–97.3), positive likelihood ratio 11,096 (95%CI 6,794–18,120), and negative likelihood ratio 0.04 (95%CI 0.02–0.07). Fig. 2 displays a graphical overview of the final model.
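For concreteness, the headline statistics above can be reproduced from approximate 2x2 counts back-calculated from the reported values (TP = 333, FP = 16, FN = 14, TN = 184,991; Table 3 of the paper holds the authoritative table, and the 16 false positives match the count discussed below):

```python
# Approximate 2x2 counts back-calculated from the reported statistics.
tp, fp, fn, tn = 333, 16, 14, 184_991

sensitivity = tp / (tp + fn)               # ~0.960
specificity = tn / (tn + fp)               # ~1.000
ppv = tp / (tp + fp)                       # ~0.954
lr_pos = sensitivity / (1 - specificity)   # ~11,000
lr_neg = (1 - sensitivity) / specificity   # ~0.04

print(f"sens={sensitivity:.3f} spec={specificity:.5f} "
      f"PPV={ppv:.3f} LR+={lr_pos:.0f} LR-={lr_neg:.2f}")
```

The enormous positive likelihood ratio reflects the near-zero false positive rate: only 16 of 185,007 controls were flagged by the model.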

Table 3 Predictive model vs reference standard for ESS-CRS status
Fig. 2
figure 2

Overview of final model to identify CRS-ESS case encounters within a surgical cohort. Case was assigned if an encounter contained one of the ICD-10 diagnostic CRS codes, AND one of the CCI procedural ESS codes. CRS = Chronic Rhinosinusitis; ESS = Endoscopic sinus surgery; ICD-10 = International Classification of Diseases, version 10; CCI = Canadian Classification of Health Interventions

Further examination of the 16 false positives (encounters identified as cases by the model but classified as controls by the reference standard) revealed that eight were patients with recurrent sinusitis according to the reference standard.

Internal validation of model

Using criteria from the final model, we retrieved a hospital cohort of all cases and 200 randomly selected controls from the year following the derivation cohort, Jan. 1st, 2013 to Dec. 31st, 2013. A chart review, blinded to model output case status, was then performed to determine reference standard case status. The OR report for the selected surgical encounter was sufficient to determine case status in all encounters. After the model output case status was revealed, a 2x2 table was again created with excellent accuracy: sensitivity 98.9% (95%CI 95.8–99.8), specificity 97.1% (95%CI 93.4–98.8), positive predictive value 96.9% (95%CI 93.0–98.7), positive likelihood ratio 33.6 (95%CI 15.3–74.0), and negative likelihood ratio 0.01 (95%CI 0.00–0.04) (Table 4).

Table 4 Internal validation of predictive model of ESS-CRS status


Discussion

We developed an internally validated model that accurately identified patient encounters in which endoscopic sinus surgery was performed for chronic rhinosinusitis at The Ottawa Hospital over a 3-year period. This model is simple and uses readily available administrative data to accurately differentiate between ESS-CRS cases and controls within a surgical cohort. The criteria for a case (at least one ICD-10 CRS diagnostic code and at least one CCI ESS procedural code) were not created a priori, but through a trial-and-error process with the observations and variables contained within the dataset, with knowledge of the chart review data. However, we argue that this model has face validity for Otolaryngologic epidemiology research.

Despite the importance of validation studies for AD codes, the lack of code validation is hardly unique to CRS: a 2011 review of a random Medline sample of 115 AD research studies found that only 14 (12.1%) “measured or referenced the association of the code with the entity it supposedly represented”, and “of five studies reporting code sensitivity and specificity, the estimated probability of code-related condition in code-positive patients was less than 50% in two” [8]. Therefore, “people with a code frequently do not have the condition it represents”. Applied to our population, it would be incorrect to assume that ESS and CRS codes are accurate without measuring the ability of each code to differentiate between a case and a control against an acceptable reference standard.

Validation studies like this one are essential for future AD research using specific codes. As one example, a recent publication used a similar chart review method to validate National Surgical Quality Improvement Program 30-day readmission codes [18].

Conducting such health administrative database research in Ontario is aided by the fact that ESS is a publicly funded procedure. As a result, all ESS performed in Ontario should be captured within these databases. Advantages of this population-based method of identifying patients undergoing ESS for CRS include: 1) minimal cost, as most of the work is performed at the computer and through chart review; 2) large numbers of patients from a population-based database, allowing complete analyses without sampling; and 3) if externally validated, the model can be used to study longitudinal outcomes for ESS as an intervention in CRS patients.

Others have identified ESS procedures (for all indications, not just CRS) in HA databases using similar procedural codes for ESS (similar CCI codes in Alberta [19], and Common Procedural Terminology codes in the US [20]). In these studies, the authors did not attempt to verify code accuracy, i.e. whether patients identified by these methods actually had ESS. Our chart review revealed that 17/411 (4%) patients who were identified as having ESS actually had a more invasive open procedure, and 37/394 (9.4%) patients who had at least endoscopic antrostomy/ethmoidectomy did not have CRS. Combined, 54/411 (13.1%) patients who were coded at TOH as having ESS did not truly have ESS or CRS. However, despite these potential inadequacies in code accuracy, a model based only on ESS codes achieved almost the same accuracy in identifying ESS-CRS cases as our final model (sensitivity 96.8% (95%CI 94.2–98.3), specificity 100% (95%CI 99.9–100), PPV 93.3% (95%CI 90.1–95.6)), lending validity to previous authors’ work. In an analysis similar to ours (although again without an attempt at code validation), Benninger et al. identified ESS-CRS patients within a cohort of 35.5 million patients enrolled in the MarketScan Commercial Claims and Encounters database in 2010 [21]. They used analogous codes: sinus surgery codes (CPT-4 31254–31288 [Common Procedure Terminology, 4th Ed]) and ICD-9 CRS diagnostic codes (473.x), and identified 2,833 ESS-CRS patients. Our results suggest that these methods of identifying ESS procedures may be accurate; this would be further supported if our model were externally validated or if other authors carried out similar validation projects.

We found that eight of the false positives identified by the final model were encounters in which ESS was performed for recurrent sinusitis with no coexisting CRS. This misclassification reflects a potential inability of administrative database codes to differentiate between these two conditions. Although this did not greatly affect our validation statistics, it could affect external validity, for example in centres where a greater proportion of ESS is performed for recurrent sinusitis.

Several assumptions must be made that could be interpreted as study weaknesses. First, development of the reference standard, the predictive model, and the internal validation were all performed by a single clinician. This could bias case ascertainment in the reference standard and internal validation, as well as variable selection for the final model, falsely elevating the model accuracy. Second, we used Otolaryngologist-diagnosed CRS for reference standard case ascertainment. This assumes that the Otolaryngologist correctly diagnosed CRS; it is possible that strict diagnostic criteria were not applied. We considered establishing a guideline-based CRS diagnosis as the reference standard through a retrospective review of the patient charts (including clinic notes and imaging), but this would have been exposed to recall and selection bias. Third, we must also assume that patient encounters are correctly recorded in the surgical database, and specifically that ESS encounters were correctly identified for the chart review.

Future direction

Our future direction includes external validation at other tertiary care centres, similar to the methods used in internal validation. An externally validated model can then be used to study longitudinal outcomes and health services research of this population. Other centres may be encouraged to perform their own external validation based on our model criteria, with the overarching objective of producing much needed accurate CRS epidemiological data.


Conclusions

A simple model based on administrative database codes accurately identified surgical encounters in which endoscopic sinus surgery was performed for chronic rhinosinusitis (CRS) at a tertiary care centre. Compared to a reference standard including a chart review and Otolaryngologist-diagnosed CRS, this model achieved excellent validation statistics: sensitivity 96.0% (95%CI 93.2–97.7), specificity 100%, and positive predictive value 95.4% (95%CI 92.5–97.3). Internal validation was achieved with similarly high validation statistics.

This model has potential for large population-based cohorts to study longitudinal outcomes of patients who have endoscopic sinus surgery for chronic rhinosinusitis.


References

  1. Ray NF, Baraniuk JN, Thamer M, et al. Healthcare expenditures for sinusitis in 1996: contributions of asthma, rhinitis, and other airway disorders. J Allergy Clin Immunol. 1999;103(3):408–14.

  2. Chen Y, Dales R, Lin M. The epidemiology of chronic rhinosinusitis in Canadians. Laryngoscope. 2003;113(7):1199–205.

  3. Kilty S. Canadian guidelines for rhinosinusitis: practical tools for the busy clinician. BMC Ear Nose Throat Disord. 2012;12:1.

  4. Macdonald KI, McNally JD, Massoud E. The health and resource utilization of Canadians with chronic rhinosinusitis. Laryngoscope. 2009;119(1):184–9.

  5. Macdonald KI, Kilty SJ, van Walraven C. Chronic rhinosinusitis identification in administrative databases and health surveys: a systematic review. Laryngoscope. 2015; epub ahead of print. doi:10.1002/lary.25804.

  6. McIsaac DI, Gershon A, Wijeysundera D, et al. Identifying obstructive sleep apnea in administrative data: a study of diagnostic accuracy. Anesthesiology. 2015;123:253–63.

  7. van Walraven C, Austin P. Administrative database research has unique characteristics that can risk biased results. J Clin Epidemiol. 2012;65:126–31.

  8. van Walraven C, Bennett C, Forster AJ. Administrative database research infrequently used validated diagnostic or procedural codes. J Clin Epidemiol. 2011;64:1054–9.

  9. Pisesky A, Benchimol EI, Wong CA, et al. Incidence of hospitalization for respiratory syncytial virus infection amongst children in Ontario, Canada: a population-based study using validated health administrative data. PLoS One. 2016;11(3):e0150416. doi:10.1371/journal.pone.0150416.

  10. Benchimol EI, Manuel DG, To T, et al. Development and use of reporting guidelines for assessing the quality of validation studies of health administrative data. J Clin Epidemiol. 2011;64(8):821–9.

  11. Desrosiers M, Evans GA, Keith PK, et al. Canadian clinical practice guidelines for acute and chronic rhinosinusitis. J Otolaryngol Head Neck Surg. 2011;40 Suppl 2:S99–193.

  12. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies. Radiology. 2015;277:826–32.

  13. Harvey RJ, Parmar P, Sacks R, et al. Endoscopic skull base reconstruction of large dural defects: a systematic review of published evidence. Laryngoscope. 2010;122:452–9.

  14. Gotlib T, Krzeski A, Held-Ziółkowska M, et al. Endoscopic transnasal management of inverted papilloma involving frontal sinuses. Videosurg Miniinv. 2010;4:299–303.

  15. Woodworth BA, Bhargave GA, Palmer JN, et al. Clinical outcomes of endoscopic and endoscopic-assisted resection of inverted papillomas: a 15-year experience. Am J Rhinol. 2007;21:591–600.

  16. World Health Organization. The ICD-10 classification of mental and behavioural disorders: clinical descriptions and diagnostic guidelines. Geneva: World Health Organization; 1992.

  17. Canadian Institute for Health Information. Canadian Classification of Health Interventions, version 2015. Ottawa, ON: CIHI; 2015.

  18. Sellers MM, Merkow RP, Halverson, et al. Validation of new readmission data in the American College of Surgeons National Surgical Quality Improvement Program. J Am Coll Surg. 2013;216:420–7.

  19. Rudmik L, Holy CE, Smith TL. Geographic variation of endoscopic sinus surgery in the United States. Laryngoscope. 2015;125:1772–8.

  20. Psaltis AJ, Soler ZM, Nguyen SA, et al. Changing trends in sinus and septal surgery, 2007 to 2009. Int Forum Allergy Rhinol. 2012;2:357–36.

  21. Benninger MS, Sindwani R, Holy CE, et al. Early versus delayed endoscopic sinus surgery in patients with chronic rhinosinusitis: impact on health care utilization. Otolaryngol Head Neck Surg. 2015;152:546–52.



Funding

The Ottawa Hospital Academic Medical Organization supported this project. The full study protocol can be obtained by contacting the lead author.


The Ottawa Health Sciences Network Research Ethics Board approved this project (OHSN-REB 20140164). The board had no access to the data and no role in manuscript preparation.

Availability of data and materials

The data that support the findings of this study are available from the Institute for Clinical Evaluative Sciences (ICES) but restrictions apply to the availability of these data, which were used under license for the current study, and so are not publicly available. Data are however available from the authors upon reasonable request and with permission of ICES.

Authors’ contributions

KM was the lead author, and was responsible for the study design, research ethics board application, Institute for Clinical Evaluative Sciences (ICES) application, chart review, model development and internal validation, statistical analysis, and writing and preparation of the manuscript. CvW was the last author, and provided intellectual expertise and guidance, and thoroughly reviewed and revised all content. SK was the second reviewer and provided feedback. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Not applicable.

Ethics approval and consent to participate

The Ottawa Hospital Research Ethics Board approved this study.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Corresponding author

Correspondence to Kristian I. Macdonald.



Table 5 Standards for reporting diagnostic accuracy studies checklist [12] (2015 version)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Macdonald, K.I., Kilty, S.J. & van Walraven, C. Development and validation of an administrative data algorithm to identify adults who have endoscopic sinus surgery for chronic rhinosinusitis. J of Otolaryngol - Head & Neck Surg 46, 38 (2017).
