Trends and Outcomes of Proton Radiation Therapy Use for Non–Small Cell Lung Cancer

Purpose: To examine national care patterns in proton beam therapy (PBT) use for non-small cell lung cancer (NSCLC) and the effect of facility type on survival. Patients and Methods: Using the National Cancer Database, we identified 506 patients diagnosed with NSCLC from 2004 to 2014 who underwent PBT. Patients were categorized as having received treatment at an academic/research facility (ARF) or a community cancer program (CCP). Descriptive analyses were performed, and overall survival was analyzed with Kaplan-Meier methods and Cox proportional hazards models. Results: Treatments at ARFs and CCPs were equally distributed, with 253 patients at each facility type. A positive trend in PBT use over time was observed, with 2.8% of cases treated in 2008 compared to 21.5% in 2014 (P = .001). Definitive doses (≥60 Gy) were more commonly given at ARFs than at CCPs (72% versus 45%; P < .001). Five-year overall survival was 31% at ARFs and 18% at CCPs (P < .001). On multivariate analysis, outcomes were worse with treatment at CCPs (hazard ratio [HR] 1.61; 95% confidence interval [CI], 1.14-2.27; P = .007). On subanalysis of nonsurgical patients treated with ≥60 Gy, facility type was no longer significant, and dose escalation was associated with improved outcomes (≥70 Gy: HR 0.45; 95% CI, 0.25-0.81; P = .008). Conclusion: Use of PBT for management of NSCLC is on the rise. Community cancer programs were associated with higher rates of nondefinitive PBT doses and correspondingly worse outcomes. Differences in survival by facility type became nonsignificant when definitive doses were used, warranting further investigation of practice patterns in CCPs at a national level.


Introduction
Lung cancer remains an aggressive oncologic disease, affecting over 220,000 people annually and causing over 25% of all cancer-related deaths in 2017 [1]. Surgical management or definitive radiation therapy (RT) is considered standard of care for early-stage non-small cell lung cancer (NSCLC), while multimodality therapy is preferred for locally advanced disease [2,3]. In the realm of RT, investigations of the effects of various RT modalities on thoracic malignancies have already led to changes in clinical practice patterns. For example, the use of ablative radiation doses through stereotactic body radiation therapy (SBRT) is now considered an acceptable treatment strategy for medically inoperable, stage I NSCLC after randomized clinical trials and retrospective studies demonstrated that SBRT yielded high local control rates and survival outcomes comparable to surgery [4,5]. Photon-based intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy have also been shown to provide improved dose sparing of critical structures when compared to conventional 3-dimensional conformal RT.
The application of proton beam therapy (PBT) for treatment of cancer, a concept originally described by physicist Robert Wilson in 1946, has gained traction given the unique physical properties of protons. Compared to photons, which continually deposit dose throughout tissue and exhibit an exit dose, the dose distribution of protons forms a Bragg peak, denoting maximal dose deposition at a finite tissue depth followed by a sharp dose falloff with virtually no exit dose. This dosimetric difference, in turn, can theoretically translate into a reduction of treatment-related morbidity and a beneficial impact on overall survival (OS).
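The range-energy relationship behind the Bragg peak can be sketched with the Bragg-Kleeman rule, a standard empirical approximation relating a proton beam's initial energy to its range in water. The constants below are commonly quoted fit values for water and are assumptions for illustration only, not treatment-planning data:

```python
def proton_range_water_cm(energy_mev, alpha=0.0022, p=1.77):
    """Approximate proton range in water (cm) via the Bragg-Kleeman
    rule R = alpha * E^p.

    alpha and p are empirical fit constants for water (assumed values);
    clinical systems rely on measured depth-dose curves instead of this
    closed-form approximation.
    """
    return alpha * energy_mev ** p
```

A 150-MeV beam, an energy typical for thoracic targets, penetrates roughly 16 cm of water before stopping; that finite depth is where the Bragg peak deposits its maximal dose, with the sharp falloff and negligible exit dose described above.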
Currently, limited data exist on long-term outcomes of PBT for cancer care. While we await the results of Radiation Therapy Oncology Group (RTOG) 1308, a phase III randomized trial comparing the effects of PBT and photon-based chemoradiotherapy on OS of patients with inoperable stage II-IIIB NSCLC, a recent publication using the National Cancer Database (NCDB) suggests a statistically significantly higher 5-year OS of 22% versus 16% with PBT versus non-PBT, respectively, for patients with stage I-IV NSCLC [6]. The latter retrospective study provided a logistic regression model to determine predictors for receipt of PBT; however, a detailed analysis of the PBT cohort was not provided. Therefore, the objective of our study was to examine the practice patterns of PBT utilization in the United States for the management of stage I-IV NSCLC, with a particular focus on type of treating facility, as this feature has been shown to affect selected management and survival of patients with various cancer types [7][8][9].

Materials and Methods
Established in 1989 by the American Cancer Society and the Commission on Cancer (CoC) of the American College of Surgeons, the NCDB is a comprehensive, nationwide clinical surveillance registry with de-identified oncologic data acquired annually from over 1500 CoC-approved centers. Available data include patient demographics, socioeconomic status, tumor characteristics, initial course of therapy, and OS in addition to RT specifics, thereby making the NCDB a valuable resource for patterns of care analyses [10].
Patients with a diagnosis of clinical stage I-IV NSCLC from 2004 to 2014 were identified. We restricted our study to patients documented as having received PBT to the lungs and/or chest, and stratified our cohort by type of treating facility (Figure 1). Academic/research facilities (ARFs), by definition, are comprehensive cancer centers that treat 500 or more cancer cases annually and participate in postgraduate medical education in at least 4 programs, including internal medicine and general surgery. In contrast, community cancer programs (CCPs) must manage 100 to 499 cancer cases annually and are not required to have training programs. Comprehensive CCPs have a similar range of treatment volume and services as ARFs; however, like CCPs, training resident physicians is optional. For the purposes of this study, CCPs (N = 26), comprehensive CCPs (N = 200), and integrated network cancer programs (which require no minimum caseload or resident training; N = 27) were included in the generic "CCP" grouping. Additional reasons for exclusion included unknown clinical stage or facility type, and PBT directed to sites other than the lung and/or chest wall. The timing of RT and surgery was examined and categorized as "no surgery" (presumed definitive RT intent), "preoperative RT," or "postoperative RT." Regional doses were then compared among the various RT-surgery sequences and treating facilities.
Baseline patient sociodemographic, clinical, and facility characteristics were compared between ARFs and CCPs by using Pearson χ2 tests. The primary outcome of interest was OS, defined as the time from diagnosis to last contact or death. Overall survival time for surviving patients was censored at the time of last contact, and 2-year and 5-year OS by clinical stage and facility type were estimated by using the Kaplan-Meier method. A Cox proportional hazards model was used to study the effects of several factors, including facility type, on survival, expressed as hazard ratios (HRs). Age, chemotherapy use, and all variables significant on univariable analysis were included in the multivariable analysis (Supplemental Table 1). All statistical analyses were performed by using SAS v9.4 software (SAS Institute, Cary, North Carolina), and a 2-sided P value <.05 was considered statistically significant.
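The Kaplan-Meier (product-limit) estimator handles censored follow-up by recomputing the at-risk denominator at each observed death time, which is what distinguishes it from a naive survival fraction. A minimal sketch in plain Python (illustrative only; the actual analysis was performed in SAS):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimates.

    times:  follow-up time for each patient (e.g., months from diagnosis)
    events: 1 if death was observed, 0 if the patient was censored
            at last contact
    Returns a list of (time, survival probability) pairs, one per
    distinct time at which at least one death occurred.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # group all subjects sharing the same follow-up time
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((t, surv))
        n_at_risk -= removed  # deaths and censored both leave the risk set
    return curve
```

A censored patient leaves the risk set without contributing a death, so later survival probabilities are computed over progressively smaller denominators rather than the full cohort.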

Results

Patient and Clinical Characteristics
The median age was 70 years (range, 42-89 years), and 53% of patients were male. Nearly half of patients were diagnosed with stage III disease (47%), while 25% had stage I, 13% stage II, and 16% stage IV disease. Patients were generally healthy without comorbidities (66%), lived in a zip code in the highest median household income quartile (≥$46,000; 47%), and had Medicare (69%). Interestingly, several sociodemographic and clinical features such as gender, median household income, type of insurance, clinical stage, and tumor location were comparable between ARFs and CCPs. However, significant discrepancies existed by facility type with regard to race and comorbidity scores, both of which can be associated with outcomes [11]. Approximately 74% of all black individuals were treated at ARFs, totaling 9% of the ARF population in contrast to 3% of the CCP population (P = .004). The proportion of patients with a comorbidity score of 1 or 2 was disproportionately higher at ARFs (24% and 16%, respectively) compared to CCPs (19% and 8%; P = .004). On average, patients treated at CCPs lived closer to treating institutions, with a mean and median distance to facility of 29.9 miles and 9.1 miles (range, 0.4-1504 miles) compared to 88 miles and 18.6 miles (range, 0.2-2241.5 miles) for patients treated at ARFs (P < .001). Patient and clinical characteristics are shown in Table 1.

Therapy Specifications Including RT Dosing
With regard to multidisciplinary treatment, 438 patients (87%) did not undergo a surgical intervention. Among those who did, lobectomies were more commonly performed (n = 38) than pneumonectomies (n = 7), with 68% (n = 26) of lobectomies performed at ARFs (P = .065). Approximately two thirds of all patients (n = 338, 67%) received chemotherapy, a proportion that remained roughly equivalent when stratified by facility type (P = .738). Because total radiation dose delivered can affect survival, we sought to investigate whether differences existed in recorded radiation dosing between ARFs and CCPs. The median dose delivered at ARFs (66.6 Gy) was higher than at CCPs (60 Gy). According to the National Comprehensive Cancer Network (NCCN), a minimum of 45 Gy in the preoperative setting, 60 Gy (preferred range, 60-70 Gy) in the definitive setting (no surgery), and 50 Gy in the postoperative setting are considered standard recommended doses [12]. Compared to national standards, 17% (n = 43) of patients treated at ARFs and 9% (n = 24) of patients treated at CCPs underwent preoperative or postoperative RT, most of whom received appropriate radiation doses. However, in the definitive setting, only 43% (n = 108) of patients at CCPs received appropriate doses compared to 66% (n = 168) at ARFs (P < .001) (Supplemental Table 2).
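The guideline-concordance comparison above reduces to checking each patient's total delivered dose against the NCCN minimum for the applicable RT-surgery sequence. A hypothetical sketch (thresholds are the NCCN minimums cited above; the function and dictionary names are illustrative, not part of the study's actual code):

```python
# Minimum recommended total dose (Gy) by treatment setting, per the
# NCCN thresholds described in the text.
NCCN_MIN_DOSE_GY = {
    "preoperative": 45.0,
    "definitive": 60.0,    # no surgery; preferred range 60-70 Gy
    "postoperative": 50.0,
}

def is_guideline_concordant(setting, total_dose_gy):
    """Return True if the delivered dose meets the NCCN minimum
    for the given treatment setting."""
    if setting not in NCCN_MIN_DOSE_GY:
        raise ValueError(f"unknown treatment setting: {setting!r}")
    return total_dose_gy >= NCCN_MIN_DOSE_GY[setting]
```

Under this rule, a definitive-intent course of 59.4 Gy is flagged as an underdose while 66.6 Gy (the ARF median) is concordant, which is how the 43% versus 66% appropriate-dose split in the definitive setting was tallied.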

Survival Analysis
Median follow-up was 15.2 months for the ARF cohort and 23.5 months for the CCP cohort. When stratified by clinical stage, the 5-year OS was 36% for patients with stage I disease, 34% for stage II, 23% for stage III, and 5% for stage IV (P < .001) (Figure 3A). The 2-year and 5-year OS estimates for patients treated at ARFs were nearly double those of the CCP population, at 61% and 31% compared to 35% and 18%, respectively (P < .001) (Figure 3B). On multivariate analysis of the overall cohort, receipt of PBT at a CCP was associated with worse outcomes than at an ARF (hazard ratio [HR] 1.61; 95% CI, 1.14-2.27; P = .007). Other factors that negatively impacted survival included advanced clinical stage III (HR 2.05; 95% CI, 1.29-3.27; P = .003) and stage IV (HR 4.31; 95% CI, 2.62-7.11; P < .001), and a lower versus upper lobe tumor location (HR 1.55; 95% CI, 1.11-2.15; P = .009). Treatment with chemotherapy (HR 0.5; 95% CI, 0.34-0.74; P = .001) or treatment at a facility located in the Atlantic/New England region was associated with improved survival (Table 2).

Discussion
Although RT has been an integral part of the cancer treatment paradigm for many decades, the application of PBT in the field is a relatively recent development. Loma Linda University Medical Center (Loma Linda, California) opened the first hospital-based proton center in 1990, and to date there are 25 operating centers nationwide, with an estimated 11 centers under construction [13,14]. The dose-sparing attributes of PBT relative to other RT modalities make it well suited for various diagnoses, including pediatric cancers and cancers of the prostate, head and neck, or lung. Indeed, our study, which demonstrates an absolute annual increase in the number of patients with NSCLC being treated with PBT, correlates with this positive global trend [15,16]. Apart from physical access, type of insurance and finances can often affect patients' access to proton therapy. In our study, 69% of patients had Medicare and 23% were privately insured. Underrepresented, however, were patients who were uninsured/self-pay (1%) or on Medicaid (4%). Similarly, only 23% (n = 120) of patients fell into the lowest 2 median household income quartiles, compared to 71% (n = 361) of patients making $35,000 or more per year. Therefore, efforts should be directed towards addressing these socioeconomic disparities, particularly if future NSCLC clinical trials demonstrate a survival benefit with PBT over current photon-based treatments [17].
Our study provides survival outcomes by hospital type, which has been recommended as a useful quality metric of cancer care [18]. Type of treating facility has been shown to affect long-term outcomes of patients with various cancer types. For example, for patients with oral cavity cancer, Rubin et al [9] found that treatment at an ARF was associated with a higher likelihood of undergoing surgical intervention and with improved outcomes compared to treatment at a CCP (HR 0.95; 95% CI, 0.91-0.98; P < .01). Furthermore, Dholaria et al [19] noted that treatment at ARFs was associated with higher 4-year OS than at CCPs (29% versus 22%; P < .001) for all patients with NSCLC. In the present study, we found that in a generally well-balanced cohort, with the exception of comorbidity score and race, which favored the CCP population, 5-year OS was significantly higher at ARFs than at CCPs (31% versus 18%; P < .001). On multivariate analysis, treatment at a CCP remained significantly associated with worse outcomes (HR 1.61; 95% CI, 1.14-2.27; P = .007). Another potential difference in patient care by facility type that may influence long-term outcomes is the range of salvage therapies offered after recurrences are detected. We were unable to investigate this in the current study; therefore, additional studies are required to assess modern practice patterns in the salvage setting and how they vary in the community compared to academic institutions.
Hospital volume-outcome relationships have been well documented for surgical interventions [20-22]; however, we chose to analyze the cohort by facility type instead, as a surrogate for multiple known and unknown factors that can influence treatment selection. Both ARFs and comprehensive CCPs must report a caseload of 500 or more cancer cases annually; however, only ARFs are required to participate in postgraduate medical education programs, and they are likely to have additional resources as well. Using the Surveillance, Epidemiology, and End Results (SEER)-Medicare database, Charlton et al [7] found that institutions with National Cancer Institute (NCI) designation or affiliation with residency programs and/or major medical schools were more likely to provide guideline-concordant care to patients with stage II/III rectal cancer than institutions lacking these qualifications. Similarly, the use of recommended neoadjuvant chemotherapy for select patients with breast cancer has been shown to occur less frequently at CCPs than at ARFs [23], raising awareness of notable practice pattern variations. One pertinent distinction in treatments between facilities was that CCPs more frequently delivered lower total radiation doses than ARFs. Not only was the median dose lower at 60 Gy compared to 66.6 Gy, but the proportion of patients receiving an underdose of <60 Gy in the definitive RT setting was 2.5 times greater at CCPs (32% versus 13%). The reasons for this observation may be multifactorial, including early termination of treatment due to treatment-related toxicities, patient noncompliance, lack of resources for management of severe toxicities, or a general practice trend towards offering preoperative doses with the expectation of surgical intervention thereafter.
Errors in reported radiation doses are also plausible; however, this is less likely, as the NCDB has strict registry documentation procedures, similar to the SEER database, to audit and correct data as needed, and it is unlikely that CCPs across the nation are consistently making similar documentation errors relative to ARFs.
Radiation dose is known to affect locoregional control and survival. When definitive dosing was taken into account in our subanalysis, the discrepancy in outcomes between ARFs and CCPs became nonsignificant, indicating that RT dose remains a predictor of survival and that dose delivery variations among facility types warrant further investigation at an institutional and national level. Interestingly, as in other studies [24-26], we observed a survival benefit with doses ≥70 Gy (HR 0.45; 95% CI, 0.25-0.81; P = .001). NCCN guidelines currently recommend definitive RT doses of 60 to 70 Gy, with caution regarding doses up to 74 Gy given the findings of RTOG 0617, which found no difference in survival between 60 Gy and 74 Gy for patients with inoperable stage III NSCLC [3,27]. However, this trial allowed only 3-dimensional conformal RT or IMRT, both of which are photon-based modalities and may lead to higher doses to thoracic structures such as the heart, esophagus, and spinal cord in comparison to PBT. Therefore, additional studies are needed to determine whether higher radiation doses delivered through PBT can yield improved survival without an increase in the treatment toxicity profile.
While facility type became nonsignificant in our multivariate subanalysis, the location of the treating facility remained a significant predictor of survival, with patients treated in the Atlantic/New England region having improved outcomes compared to patients who received their care in Central and Pacific/Mountain locations. Part of this discrepancy may be explained by the higher proportion of ARFs to CCPs located in the Atlantic/New England region (71% versus 29%) compared to the Central (15% versus 85%) and Pacific/Mountain (62% versus 38%) locations. However, because these findings persisted even after accounting for type of treating facility, regional practice patterns may have a larger impact on survival than facility type. Differences in primary management of patients with NSCLC by geographic location should therefore be further investigated to determine potential socioeconomic, demographic, and clinicopathologic factors associated with the observed differences in treatments and outcomes.
The question of whether PBT offers a survival benefit over photon-based NSCLC treatments is currently being investigated in the phase III randomized trial RTOG 1308. In the interim, a recent publication using the NCDB compared outcomes of 243,474 patients with NSCLC of all clinical stages who received photon-based RT (non-PBT) with those of 348 patients who received PBT [6]. In their analysis, the authors reported a significant 5-year OS difference of 22% with PBT compared to 16% with non-PBT. However, when they stratified photon-based treatments by modality, there was no survival improvement with PBT relative to IMRT (HR 1.05; 95% CI, 0.91-1.20; P = .524). Radiation dose ranges were also captured in their study, but the authors did not elaborate on why the dose range was so broad (0.01-1395.6 Gy) or whether accounting for definitive doses ≥60 Gy would affect the OS benefit seen with PBT over general photon-based RT. The present study, which analyzes a similar PBT cohort from 2004 to 2014, differs from the previous study in that it reflects a more thorough analysis of PBT and demonstrates the impact of facility type on outcomes for patients with NSCLC. Moreover, after observing a survival difference between ARFs and CCPs, we were able to identify RT dose variations by facility type as a probable cause.
Several limitations exist in the current study, given its retrospective nature. We were unable to account for the extent of primary disease that was targeted with PBT or for what quality assurance measures were in place. A detailed analysis of dose variations (such as hypofractionated or stereotactic body RT regimens) by facility type and clinical stage was attempted but not reported given significant variations in available fractionation data. Therefore, total delivered RT dose with a 60-Gy cutoff for definitive dosing was used, since most patients had stage II or III disease (60%) and would likely require conventional dosing over more aggressive dose-per-fraction schemes owing to larger treatment field sizes. Further specifications regarding PBT, such as passive scattering versus active scanning delivery methods, are unavailable in the NCDB and therefore could not be analyzed. Information regarding the type(s) of chemotherapy used or the number of cycles given is also unavailable in the NCDB. Moreover, compared to other NCDB studies analyzing outcomes of photon-based treatments for NSCLC, the overall PBT cohort is relatively small at 506 patients. However, this is the largest such study to date, and because it pools data from over 70% of the cancer population in the United States, it provides findings on PBT practice patterns and associated outcomes beyond what could be achieved at a single- or multi-institutional level.

Conclusion
In summary, the application of PBT for the management of NSCLC is becoming more common over time. For unknown reasons, patients receiving treatment at ARFs were more likely to receive guideline-concordant care with respect to radiation dose and had better OS than patients treated at CCPs. When examining nonsurgical patients treated with recommended RT doses of ≥60 Gy, the effect of facility type on survival was no longer significant, warranting an evaluation of practice patterns and outcomes at an institutional level, particularly in community cancer programs. Lastly, given the dose-sparing advantages of protons over photons, clinical trials should examine the role of dose escalation using PBT for NSCLC to determine whether a dose-outcomes relationship with limited toxicity exists.