Fidelity monitoring across the seven studies in the Consortium of Hospitals Advancing Research on Tobacco (CHART)

Background: This paper describes fidelity monitoring (treatment differentiation, training, delivery, receipt, and enactment) across the seven National Institutes of Health-supported Consortium of Hospitals Advancing Research on Tobacco (CHART) studies. The objectives were to describe approaches to monitoring fidelity, including treatment differentiation (lack of crossover), provider training, provider delivery of treatment, patient receipt of treatment, and patient enactment (behavior), and to provide examples of the application of these principles.

Methods: Conducted between 2010 and 2014 and collectively enrolling over 9500 inpatient cigarette smokers, the CHART studies tested different smoking cessation interventions (counseling, medications, and follow-up calls) shown to be efficacious in Cochrane Collaborative Reviews. The CHART studies compared their unique treatment arm(s) to usual care and used common core measures at baseline and 6-month follow-up, but varied in their approaches to monitoring the fidelity with which the interventions were implemented.

Results: Treatment differentiation strategies included the use of a quasi-experimental design and monitoring of both the intervention and control groups. Almost all of the studies provided extensive training for personnel and used a checklist to monitor the intervention components, but the items on these checklists varied widely and were based on unique aspects of the interventions, US Public Health Service and Joint Commission smoking cessation standards, or counselor rapport. Delivery of medications ranged from 31 to 100 % across the studies, with higher levels in studies that gave away free medications and lower levels in studies that sought to obtain prescriptions for the patient in real-world systems. Treatment delivery was highest among the studies that used automated (interactive voice response and website) systems, but this did not automatically translate into treatment receipt and enactment. Some studies measured treatment enactment in two ways (e.g., counselor or automated system report versus patient report), showing concurrence or discordance between the two measures.

Conclusion: While fidelity monitoring can be challenging, especially in dissemination trials, the seven CHART studies used a variety of methods to enhance fidelity with consideration for feasibility and sustainability.

Trial registration: Dissemination of Tobacco Tactics for hospitalized smokers, Clinical Trials Registration No. NCT01309217. Smoking cessation in hospitalized smokers, Clinical Trials Registration No. NCT01289275. Using "warm handoffs" to link hospitalized smokers with tobacco treatment after discharge: study protocol of a randomized controlled trial, Clinical Trials Registration No. NCT01305928. Web-based smoking cessation intervention that transitions from inpatient to outpatient, Clinical Trials Registration No. NCT01277250. Effectiveness of smoking-cessation interventions for urban hospital patients, Clinical Trials Registration No. NCT01363245. Comparative effectiveness of post-discharge interventions for hospitalized smokers, Clinical Trials Registration No. NCT01177176. Health and economic effects from linking bedside and outpatient tobacco cessation services for hospitalized smokers in two large hospitals, Clinical Trials Registration No. NCT01236079.


Introduction
Treatment fidelity is the extent to which an intervention is implemented as designed [1]. The rationale for monitoring fidelity is to document the methodological strategies used to monitor and enhance the reliability and validity of the intervention [2]. Moreover, fidelity speaks to the degree of integrity of an intervention. Ensuring treatment fidelity gives researchers more confidence in their results [2] and increases credibility [3], particularly in trials conducted in community settings, where settings, therapists, and patients are diverse [4, 5].
Lack of fidelity refers to any gap between the intervention as it was planned and the intervention as it was actually implemented [6]. If a treatment was not delivered completely as designed and the statistical effects of the treatment are non-significant, the treatment may be discarded prematurely. Indeed, higher treatment fidelity has been linked to improved outcomes [6]. If significant effects are found but fidelity was not monitored, it is unclear whether the difference was due to the intervention itself or to something else [2, 7]. Fidelity measures can be monitored and included in statistical analyses (e.g., the number of times an individual subject was reached on the phone to assess the implementation of a phone intervention), thereby increasing knowledge about how the components of an intervention influence the outcome. Finally, the assessment of treatment fidelity serves to identify barriers to implementation that need to be addressed and modified in order to give the intervention a greater chance of sustainability [8].
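For example, a fidelity measure such as the number of completed intervention calls can be carried into the outcome analysis as a "dose" variable. The sketch below (Python, with invented participant records; the variable names are illustrative and not taken from the CHART studies) tabulates quit rates by call dose:

```python
from collections import defaultdict

# Hypothetical participant records: (calls_completed, quit_at_followup)
records = [(0, False), (0, False), (1, False), (1, True),
           (2, True), (3, True), (3, False), (5, True)]

# Group outcomes by fidelity "dose" (number of calls completed)
by_dose = defaultdict(list)
for calls, quit in records:
    by_dose[calls].append(quit)

# Quit rate at each dose level
quit_rate = {dose: sum(outcomes) / len(outcomes)
             for dose, outcomes in sorted(by_dose.items())}
print(quit_rate)
```

A dose variable built this way can then enter a regression model alongside treatment assignment, separating "was the intervention offered" from "how much of it was actually received".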
The five main components of fidelity monitoring are well established and include study design, training, delivery, receipt, and enactment [2, 9, 10]. Study design monitoring includes consideration of whether or not the intervention is sufficiently differentiated from the control group (avoidance of cross-contamination). Standardized training of providers/interventionists supports standardized treatment delivery; in addition, monitoring and maintaining provider skills over time is a necessity. Treatment delivery measures whether or not the treatment was delivered as intended. Treatment receipt measures whether or not the participant understands the intervention and indicates confidence and ability to apply the skills learned in the intervention. Treatment enactment measures whether or not the participant participated in the intervention. While many studies have been published on the importance of fidelity monitoring, and some have published results from their own studies, no papers that we know of have described approaches to fidelity monitoring across several smoking cessation studies. Hence, this paper describes the methods of fidelity monitoring and provides fidelity data across seven studies that were funded by the National Institutes of Health (NIH) to disseminate unique smoking cessation interventions to hospitalized smokers. Since fidelity influences both the degree to which changes can be attributed to the intervention (internal validity) and the ability to replicate and disseminate the intervention (external validity) [11], the assessment of treatment fidelity across the seven studies may be useful to researchers conducting smoking cessation and other behavioral intervention trials [7, 9-13].

Methods
Collectively called the Consortium of Hospitals Advancing Research on Tobacco (CHART), these studies, conducted between 2010 and 2014, recruited over 9500 inpatient cigarette smokers to test the effectiveness of unique smoking cessation interventions. The protocols of these seven studies have been published [14-21] and outcome papers will follow. All of the CHART studies implemented components of smoking cessation interventions shown to be efficacious in Cochrane Collaborative Reviews [22], including medication, behavioral counseling, and follow-up calls. Although usual care varied across the seven studies, all of the CHART studies compared their unique treatment arm(s) to usual care and used common core measures at the baseline and 6-month follow-up. However, each study was unique in how fidelity was monitored. A brief overview of each study design follows.

University of Michigan Medical Center (UMMC)
The Tobacco Tactics study used a quasi-experimental design in five Michigan Trinity Health System hospitals. Nurses in three experimental hospitals were taught to conduct the in-hospital, face-to-face Tobacco Tactics intervention in a 1-h educational session using the Tobacco Tactics toolkit. Nurses in the other two hospitals continued usual care (brief advice and a brochure). Nurses and inpatient smokers (N = 1528), regardless of motivation to quit, in both the experimental and control sites were surveyed pre- and post-implementation of the intervention [15].

University of California, San Diego (UCSD)
In this 2 by 2 factorial randomized controlled trial (RCT), 1270 inpatient smokers motivated to quit from three healthcare systems (a total of 5 hospitals) in San Diego and Davis, CA were randomized to one of four conditions: 1) usual care; 2) nicotine patches provided at the time of discharge; 3) telephone counseling after discharge; and 4) nicotine patches at discharge plus telephone counseling after discharge. Six standard counseling sessions were proactively provided by the state quitline post discharge. Subjects in the patch conditions received 8 weeks of nicotine patches at the time of discharge [16].

University of Kansas (KU)
This RCT compared a warm hand-off versus fax referral (usual care) for linking hospitalized smokers motivated to quit to state quitline services (N = 1054) in two large hospitals. Counselors provided brief cessation advice to all patients and gave them a standard smoking cessation booklet. In the warm hand-off intervention condition, patients received an abbreviated bedside intervention, and the counselor called the quitline and transferred the call to the patient's bedside hospital phone or mobile phone before leaving. In the control condition, fax-referral patients received the usual care cessation counseling and were fax-referred to the quitline. Outcome data collected from the state quitline provider, Alere Well Being (AWB), included enrollment and the number of calls completed out of a total of 5 intervention calls [21].

New York University (NYU)
This RCT compared two methods of smoking cessation delivered to inpatient smokers, regardless of motivation to quit, discharged from two urban public hospitals in New York City (Bellevue and the Manhattan VA) (N = 1619): 1) seven sessions of proactive telephone counseling delivered by study staff (n = 805); or 2) referral to the New York State Quitline via fax or online referral (n = 814). Counselors in the intervention arm assisted patients in obtaining four weeks of NRT patch or gum after discharge. Counselors in the Quitline arm assisted patients in obtaining NRT following usual Quitline protocols; for most participants this was two weeks of NRT patch or gum [17].

University of Alabama at Birmingham (UAB)
This RCT randomized smokers, regardless of motivation to quit, in a large university hospital who had an email address and internet access to: 1) usual care; or 2) the web-based Decide2Quit intervention (N = 1488). The intervention included a visit by a counselor who introduced participants to the website, assisted them with registering on the site, and provided a booklet that gave an overview of the website, including information to help them log in on their own. If a participant was discharged before bedside registration, a booklet was sent to their home address and a counselor walked them through website registration and orientation over the phone. Questions completed during intervention registration informed tailored email messages automatically sent on a scheduled basis. The website allowed participants to send messages to a counselor and receive responses. Participants were called between 10 and 30 days post hospitalization to encourage use of the website [18].

Kaiser Permanente Center for Health Research (KPCHR)
The Inpatient Technology-Supported Assisted Referral (ITSAR) study was an RCT recruiting smokers motivated to quit in three large hospitals serving the Portland, OR metropolitan area that compared: 1) adding an assisted referral (AR) to available outpatient quit services and medications following discharge plus four post-discharge interactive voice response (IVR) telephone follow-up calls (AR + IVR); and 2) usual care bedside cessation counseling, medication, and information about available quit services. IVR calls captured smoking status, cessation program enrollment status, and medication use, and provided brief supportive messages. Nine hundred participants were recruited (599 to AR + IVR and 301 to usual care) using a 2:1 recruitment strategy [19].

Massachusetts General Hospital (MGH)
The RCT Hospital-Initiated Assistance for Nicotine Dependence 2 (Helping HAND 2) is a multisite iteration of Helping HAND 1 [23] and is enrolling 1350 adult smokers motivated to quit admitted to 3 acute care hospitals in Massachusetts and Pennsylvania. Data are included on 529 participants who were enrolled at the time of this writing. All subjects received a brief in-hospital smoking intervention and were randomly assigned at discharge to either usual care (referral to the Massachusetts or Pennsylvania state quitline) or an extended care intervention, which consisted of a 3-month program with 2 components: 1) free medication (a 30-day supply of FDA-approved medication including nicotine replacement, bupropion, or varenicline) given at hospital discharge and refillable free for a total of 90 days to facilitate medication use and adherence; and 2) TelASK (participating IVR company) triage to telephone counseling from a national quitline provider, Alere Well Being (AWB) [20].

Results
Table 1 summarizes how each of the five components of fidelity was addressed and monitored by each of the seven CHART studies. Selected results of the fidelity data from each site follow in Figure 1 and Tables 2 through 7.

University of Michigan Medical Center (UMMC)
Figure 1 shows the results for treatment delivery assessed using the nurse interviews conducted with 11 % (n = 140) of nurses in the intervention sites. Nurses reported high delivery rates on most items, except for showing the DVD. Qualitative comments from the nurses indicated that the overhead television system was often not working or, when it was working, was cumbersome to use. One nurse who did not attend the training reported delivering the intervention as she learned it from the other nurses.

University of California, San Diego (UCSD)
A chart audit showed that treatment differentiation was high in that no patients in the control or counseling-only groups were given nicotine patches at discharge. Examination of the quitline data revealed that 10 subjects (<1 %) in the control or patch-only groups proactively called the quitline after hospital discharge and received counseling. Table 2 shows an example of monitoring treatment delivery, comparing the rates for those in the patch-only condition to those in the counseling-plus-patch condition. Overall, 55.1 % received their patches upon discharge, 34.5 % were mailed patches after discharge, 1.7 % refused, and 8.6 % were unknown. There was greater difficulty delivering patches to subjects in the combined counseling-plus-patch condition than in the patch-only condition (58.0 % versus 67.5 %; p < .05).

University of Kansas (KU)
Table 3 shows fidelity data for treatment delivery assessed using 108 direct observations of counselors in both the intervention (n = 57) and control (n = 51) arms. The mean fidelity was 94 % for both the treatment and control arms. In terms of treatment enactment, 99.6 % of participants assigned to the warm hand-off intervention enrolled in the quitline, while only 59.6 % of participants assigned to the fax-referral control arm enrolled (p < .001). The rates of completed calls were as follows: 31 % did not complete any calls, 25 % completed 1 call, 16 % completed 2 calls, 12 % completed 3 calls, 9 % completed 4 calls, and 7 % completed 5 calls.

New York University (NYU)
Table 4 shows treatment delivery for smoking cessation medications; 32 % of intervention participants were interested in receiving NRT, 31 % were delivered NRT, and 8 % received an NRT prescription on hospital discharge. In terms of treatment enactment, 52 % of participants randomized to the intervention arm (n = 805) completed at least one counseling call and only 14 % received all seven counseling calls. Participants completed an average of two counseling calls (the average was four calls among participants who began counseling). The average length of the first call was 22 min and follow-up calls averaged 13 min. Eighty-one participants set a quit date with their counselor (10 % of all intervention participants; 20 % of those who began counseling), and 218 participants reported using NRT to a counselor (27 % of all intervention participants; 52 % of those who began counseling). Similarly, 2-month follow-up surveys showed that 48 % of intervention participants (384/805) reported receiving telephone counseling and 34 % (271/805) reported using NRT.
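The distinction drawn above between the mean number of calls over all intervention participants and the mean over only those who began counseling is easy to conflate; a minimal sketch (Python, with hypothetical call counts, not the NYU data) computes both:

```python
# Hypothetical completed-call counts, one entry per intervention participant
calls = [0, 0, 0, 1, 2, 4, 7]

mean_all = sum(calls) / len(calls)          # mean over everyone randomized
started = [c for c in calls if c > 0]       # only those who began counseling
mean_started = sum(started) / len(started)  # mean over starters

print(mean_all, mean_started)
```

Reporting both denominators, as the NYU study does, avoids overstating the counseling dose actually delivered across the randomized arm.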

University of Alabama at Birmingham (UAB)
Treatment enactment was monitored using web-system tracking of log-ins, the number of emails automatically sent, and the number of emails sent to and responded to by the tobacco counselors, as well as by patient recall of web participation from a subgroup of participants (see Table 5). Of the 748 assigned to the web intervention, 735 (98 %) participants were registered, 73 % at bedside and 25 % over the telephone. Post-hospitalization calls by interventionists encouraging use of the website were completed for 64 % of intervention participants, while another 34 % were left detailed messages encouraging use. The web-system documented at least 1 web page visited by 700 (94 %) participants, with a mean number of pages accessed of 5.4 (sd = 4.1; range 1-29). Ninety-one participants (11 %) emailed the tobacco counselor two or more times, while another 51 % emailed the counselor at least once. When a subset of participants (22 %; n = 167) were asked how often they read their intervention emails, 27 % of respondents reported always, 19 % often, 24 % half the time, 13 % rarely, 5 % never, and 12 % did not respond. Participants were also asked for comments regarding their experience with the intervention, providing some insight into why the website was not used as much as planned (e.g., computer problems, not comfortable using a website, found it confusing) and how the emails were perceived (e.g., boring, interesting, helpful, incessant).

Table 1 Approaches to fidelity monitoring in the seven CHART studies

UMMC: Packaged both the nurse training and the patient intervention into a toolkit. A research nurse trained trainers until they demonstrated fidelity of training; chances of sustainability were increased because all nurses in the intervention sites were trained. Pre- and post-intervention nurse surveys were conducted in intervention and control sites, and nurse interviews in intervention sites only. Delivery of medications and counseling was captured by EMR download from nurse documentation.

UCSD: Randomization was completed using iPads and tablets so recruiters had timely access to the randomized condition, minimizing data entry errors and cross-contamination of the intervention. An operations manual guided research staff on the various components of the study. Standardized training that included role-play was provided to recruitment staff (i.e., respiratory therapists and dedicated research recruiters) and to quitline staff responsible for providing the counseling intervention; the counseling used a structured protocol, and bi-weekly meetings allowed quitline counselors to discuss specific counseling cases and review skills, increasing fidelity. The project manager went into the field quarterly, or when new employees were hired, observed, and provided feedback to staff to ensure adherence to project protocols. Reports were generated from the comprehensive UCSD database (combining research-specific data with intervention data) to ensure protocol adherence (e.g., proper number of attempts for counseling clients, nicotine patches distributed to clients, materials mailed). Treatment receipt was assessed by monitoring of the quitline counseling database, hospital documentation of patch delivery, and self-report at follow-up regarding receipt and use of nicotine patches and/or quitline services or other tobacco treatment. Counseling adherence data were collected from the quitline, documenting the number of calls participants completed; the timing, length, and frequency of counseling calls were recorded. Two- and six-month evaluation calls captured self-reported use of quitting aids (including those from the study) and use of behavioral treatment (including the quitline). Also see Table 2.

KU: A master's-degree-level certified tobacco treatment specialist trained hospital tobacco use counselors; training included didactics, role-playing, and supervised delivery of the intervention, and during the training process the fidelity monitor observed new counselors. Trainers observed counselors' delivery of counseling and counselors' documentation of the intervention, and fidelity monitors observed a 10 % convenience sample of the tobacco use treatment sessions delivered in the hospital. A checklist was used to assess provision of each component of the intervention; hospital tobacco treatment counselors used a counseling checklist to document assessment, smoking cessation medication usage and recommendations, and referrals to the quitline (also see Table 3). Fidelity monitors assessed how well counselors documented the treatment provided in the medical record. The quitline provider, Alere Well Being (AWB), reported how each patient had been referred to the quitline (fax or warm hand-off), and counseling adherence data from the quitline documented the number of calls participants completed. Differentiation was assessed by comparing records of group assignment to the group file in which AWB reported data back to the research team. Treatment receipt was assessed by hospital treatment counselor documentation and self-report at follow-up as to whether participants received quitline services or other tobacco treatment.

NYU: The processes of transferring participant data to the Quitline (control) and providing multi-session telephone counseling (intervention) were handled by different study team members, limiting the chance of crossover; differentiation was assessed by comparing participant group assignments with Quitline and intervention counselor documentation. Standard operating procedure (SOP) manuals were developed for all study procedures. Intervention counselors underwent 20-30 h of initial training on the counseling protocol, including didactic lectures, role-plays, and practice with standardized patients (actors trained to portray real patients), and participated in weekly supervision with the study's clinical supervisor, providing an opportunity for training updates (e.g., review of the protocol and counseling approaches). Each month a random sample of intervention counseling sessions was audiotaped and reviewed by the study's clinical supervisor using a standardized form assessing adherence to the protocol and counseling approach; the study supervisor met with counselors individually to review the form and provide feedback. Intervention counselors completed standardized documentation of their counseling sessions in a study database with closed-ended and open-ended fields covering the number and duration of counseling calls, correctness of contact information, overall success in reaching participants, NRT orders, and topics covered. Treatment receipt (i.e., patient understanding of the intervention and confidence) was not systematically assessed, although 2-month follow-up surveys assessed patient satisfaction with treatment. Enactment was assessed by counselor documentation and 2-month patient follow-up surveys. Also see Table 4.

UAB: Treatment differentiation was achieved by limiting access to the website to those randomized to the intervention, through registration to the website by an intervention staff member. Hospital staff attended at least one two-hour training session. Once registered, participants received automated emails. The website tracked registrations, log-ins, the number of days the website was accessed, the number of web pages visited, automated email messages, and messages sent to and from the Tobacco Treatment Counselor, and a subset of participants was surveyed. Also see Table 5.

KPCHR: Counselors attended the same training sessions on tobacco-dependence treatment delivery for hospitalized patients, were monitored during initial piloting, and participated in ongoing case management discussions among the counselor group; other staff received appropriate training in the use of the study's electronic data management system at each site and in coding rules to complete the forms properly. The counselor completed a tobacco consult checklist for each smoker seen and a study enrollment checklist for consented and randomized patients, documenting consult topics discussed with the patient, assisted referral acceptance and referrals, and discharge medication orders, as well as receipt of printed quit information for patients in the control group. IVR call attempts, call completions, and responses were monitored electronically, and documentation, including IVR completion data, was tracked and reviewed monthly by study staff. Utilization of quit resources was documented at 6-month follow-up and in electronic medical records where available. A participant screening and tracking protocol was used to identify patients with subsequent hospital admissions to prevent re-enrollment. Also see Table 6.

MGH: Medication was provided exclusively to intervention participants at the time of hospital discharge (verified by study ID); study staff were trained by the overall coordinator and site coordinators to obtain medication from the inpatient pharmacy and deliver 1 month of smoking cessation medication to the patient's bedside at discharge, and medication dispensation was tracked electronically via a standard database. Similarly, only intervention participants were entered into the IVR database to receive calls after hospital discharge (determined by study ID); staff were trained to enter intervention participants' information into the IVR database so that they could be called according to the TelASK (participating IVR company) protocol, and IVR calls were initiated by TelASK with automated telephone calls to smokers. IVR calls were monitored by TelASK and study staff via secure access to a web portal; telephonic behavioral counseling enrollment and calls were monitored by AWB via a standard database; all call activity was recorded electronically via call flow sheets, and a subset of calls was randomly selected by TelASK for internal quality review. The study did not formally track treatment receipt. Also see Table 7.

Kaiser Permanente Center for Health Research (KPCHR)
Treatment delivery was assessed using a treatment form to record components of the counselor-patient discussion of tobacco use variables and an enrollment checklist to document assisted referral (AR) to outpatient counseling and medications. Among the 597 study participants who were randomized to the AR + IVR group, nearly all participants received nicotine withdrawal counseling, discussed tobacco use and quit history, were assessed for medication contraindications, discussed available quit services, and were provided printed quit materials (Table 6). Patients randomized to the AR + IVR group were offered a referral to centralized counseling services or a faxed referral to the quitline. Overall, 57 % accepted AR, 43 % accepted discharge medications, and 28 % accepted both. Patients in one hospital were less likely to accept the AR than in another hospital, but more likely to accept medications and both counseling and medications (p < .001). Patients' primary care providers were notified 76 % of the time, with nearly all providers notified in one hospital and less than half in another. About half of the intervention recipients completed call 1, and completion rates fell for each subsequent call.

Table 3 KU counselor fidelity checklist items and percent adherence

Warm hand-off (N = 57 observations)
Let patient know the survey is over and you will be moving into the treatment portion of the intervention: 98
Describe the warm hand-off process: 79
Explain to the patient that you will now call the quitline and then hand them the phone: 100
Explain they will talk to two people: first registration, and second they will be transferred to the quit coach: 98
Explain the registration person will ask some questions that might seem redundant (like those already asked): 79
Let them know you will check back in on them after the call: 100
Perform the call, using appropriate language: 100
Leave the room, notify the patient's nurse or floor staff that the patient is talking to the quitline, and discuss medications for withdrawal if appropriate: 97

Fax referral (N = 51 observations)
Let patient know the survey is over and you will be moving into the counseling/treatment portion of the intervention: 98
Conduct assessment of smoking history and readiness to quit: 96
Briefly touch on benefits of and barriers to quitting as appropriate: 100
Provide accurate medication information; use elicit-provide-elicit: 94
Build a plan to quit/stay quit using the booklet (as needed): 94
Ask if the patient requests a cessation medication script on discharge: 98
Summarize key topics, goals, and next steps: 94

Description of the fax referral process
Explain that someone will call at the number provided soon after discharge: 96
Explain they will talk to registration first and answer questions before talking to a counselor: 75
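Per-item adherence percentages such as those in the checklist above can be computed from simple per-observation records. A minimal sketch (Python; the item names are invented for illustration, not the actual KU checklist wording):

```python
# Each direct observation maps checklist items to whether the counselor performed them
observations = [
    {"describe_handoff": True,  "perform_call": True, "check_back": True},
    {"describe_handoff": False, "perform_call": True, "check_back": True},
    {"describe_handoff": True,  "perform_call": True, "check_back": False},
    {"describe_handoff": True,  "perform_call": True, "check_back": True},
]

items = observations[0].keys()
# Percent of observations in which each item was performed
adherence = {item: round(100 * sum(obs[item] for obs in observations)
                         / len(observations))
             for item in items}
print(adherence)
```

Averaging the per-item percentages then yields an overall fidelity score per arm, analogous to the 94 % mean fidelity reported for the KU observations.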

Massachusetts General Hospital (MGH)
Treatment delivery and enactment were both monitored by tracking acceptance and participation in treatment components among 529 intervention participants (see Table 7). In terms of treatment delivery, all 529 participants received one month of free medication at the time of hospital discharge, although only 51 % requested the 2nd month of medication (1st refill) and only 19 % requested the 3rd (2nd refill). All were enrolled into the TelASK IVR database and were called after hospital discharge and almost 9 out of 10 (88 %) accepted at least 1 IVR call. Success of transfer from the IVR call to counseling was also monitored.
In the first two months of the study, 33 % of calls were dropped, in part because participants hung up before the transfer was completed. After adjustments that included a reduced wait time and a recorded message asking participants to remain on the line, the IVR-to-counselor transfer rate increased and held steady, with a successful transfer for 83 % of patients requesting transfer to AWB. In terms of treatment enactment, about one-third of those who accepted at least one TelASK IVR call enrolled in telephone counseling through AWB, with 20 % completing all 5 outgoing counselor calls.
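The enactment funnel described here (IVR call accepted, then counseling enrollment, then calls completed) can be monitored with running counts at each stage. A minimal sketch (Python, with hypothetical records; the field names are illustrative, not from the MGH database):

```python
# Hypothetical per-participant enactment records
participants = [
    {"ivr_accepted": True,  "enrolled": True,  "calls_done": 5},
    {"ivr_accepted": True,  "enrolled": False, "calls_done": 0},
    {"ivr_accepted": False, "enrolled": False, "calls_done": 0},
    {"ivr_accepted": True,  "enrolled": True,  "calls_done": 2},
]

n = len(participants)
accepted = sum(p["ivr_accepted"] for p in participants)
enrolled = sum(p["enrolled"] for p in participants)
completed_all = sum(p["calls_done"] >= 5 for p in participants)

# Each stage is expressed relative to the previous one, mirroring
# "88 % accepted an IVR call; one-third of those enrolled; 20 % completed all calls"
funnel = {
    "accepted_ivr_pct": round(100 * accepted / n),
    "enrolled_pct_of_accepted": round(100 * enrolled / accepted),
    "completed_all_calls_pct_of_enrolled": round(100 * completed_all / enrolled),
}
print(funnel)
```

Expressing each stage conditional on the previous one makes clear where participants drop out of the intervention, which is the point of distinguishing delivery from receipt and enactment.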

Study design and training
There was little reported cross-contamination among the study arms. Quasi-experimental designs, such as that of the UMMC study, can prevent intervention drift because they allow for greater separation of groups (treatment differentiation). UCSD and KU included fidelity measures for the usual care arm, which can quantify treatment differentiation between arms. Training was universal across the studies and can be enhanced by developing manuals and/or toolkits for providers and patients, as was done in the UMMC and NYU studies. Training enhances standardization across intervention activities [2] and reduces the risk of variation in the strength and elements of the intervention provided. Training also increases competence, self-efficacy, and confidence [24,25].

Delivery/Receipt/Enactment
Once training is complete, continued monitoring is necessary. Four of the studies (UMMC, KU, NYU, and KPCHR) used a checklist to monitor interventionists, but the items varied widely. The KU checklist (see Table 3) was largely based on specific components of its unique intervention. The UMMC checklist (see Appendix 1) was based on Joint Commission (JC) standards for inpatient smoking cessation interventions [26]; this approach can translate directly into quality assurance. The KPCHR checklist (see Table 6) was designed to capture key elements of the US Public Health Service [27] and JC standards for smoking cessation. The NYU checklist (see Appendix 2) was based largely on the relationship the counselor built with the patient, which is important because differences in warmth and ease, interactional style, and therapist empathy and self-efficacy can represent serious threats to fidelity [4,28].
Delivery of medications ranged from 31 to 100 % across the studies, with higher levels coming from the studies that gave away free medications (UCSD and MGH) and lower levels from the studies that sought to obtain prescriptions for the patient in real-world systems (NYU). Delivery of counseling was highest among the studies that used automated systems (websites and IVR systems), such as UAB, KPCHR, and MGH, but this did not automatically translate into treatment receipt and enactment. In the KU study, warm hand-off (versus fax referral) did increase enrollment in the quitline. In the UAB study, most participants in the intervention arm accessed the website, but only about one-quarter participated in email communication.
Some studies measured treatment enactment in two ways, showing concurrence or discordance between the two measures. In the NYU study, patient reports of participating in phone calls were only slightly lower than those reported by counselors. In the UAB study, the website logs recorded fewer logins than a subset of patients self-reported, perhaps due to social desirability or recall bias, as patients were surveyed 6 months after discharge. Automated systems, such as the IVR, telephone, and website systems used in the KU, UAB, KPCHR, and MGH studies, provide easily available fidelity data. Yet automated data may not identify qualitative issues found through observations of, and surveys/interviews with, interventionists and patients, as done in the UMMC, KU, NYU, KPCHR, and other similar studies [29]. Unlike data from automated systems, qualitative data can provide rich information about barriers and facilitators to implementation.
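When enactment is measured two ways (e.g., patient self-report versus an automated log), concordance can be quantified with percent agreement or Cohen's kappa, which corrects for chance agreement. A minimal sketch with hypothetical per-participant data (the CHART studies did not necessarily report kappa):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two parallel lists of binary ratings
    (e.g., patient self-report vs. automated log, one entry per participant)."""
    assert len(a) == len(b) and a
    n = len(a)
    # Observed agreement: fraction of participants on whom the sources agree
    observed = sum(1 for x, y in zip(a, b) if x == y) / n
    # Expected chance agreement from the marginal frequencies
    pa1 = sum(a) / n
    pb1 = sum(b) / n
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (observed - expected) / (1 - expected)

# Hypothetical: did each participant log in? (self-report vs. website log)
self_report = [1, 1, 1, 0, 1, 0, 1, 1]
web_log     = [1, 1, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(self_report, web_log), 2))  # 0.5
```

Here the self-report marginal (6/8) exceeds the log marginal (4/8), mirroring the over-reporting pattern described above; the kappa of 0.5 signals only moderate agreement despite 75 % raw agreement.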
In summary, all sites monitored their intervention and study fidelity, in particular how they differentiated between study arms, what training was provided and who engaged in it, what was delivered, to whom, and how much, and, to a lesser extent, intervention receipt and enactment by participants. Using these five categories as a guide, the studies all achieved monitoring of each aspect, to greater or lesser degrees, as fit their programs. These examples may help other researchers develop their own fidelity monitoring plans by identifying similar structural components and the methods for those components described herein.

Challenges of fidelity monitoring
Recording treatment sessions and then having two observers rate fidelity is considered the gold standard [10], but it can be very time consuming and costly, especially in dissemination trials [11]. Some might argue that fidelity monitoring fundamentally changes the intervention itself, in that the fidelity assurance process serves as a reminder that would otherwise not exist. If the fidelity assessment must be viewed as part of the intervention, then the validity of the intervention is challenged when, after the research has concluded and the intervention continues as usual care, it is delivered without fidelity checks.
Rigid application of treatment protocols may impede treatment delivery. Therapists may feel "locked in" or resistant to a "cookbook" approach. Fidelity arguably does not allow the provider to tailor the intervention to the patient's needs. For example, treatment fidelity may interfere with sound clinical judgment when shorter sessions or cultural adaptations to a treatment protocol are warranted. It may also be a challenge to implement rigid protocols for smoking cessation in busy inpatient settings [3,10,30].
While modifying an intervention presents a challenge to fidelity measurement [11], a realistic and sustainable intervention may need to allow for adaptation and modification. Some studies make adaptability under certain circumstances a requirement [30-32]. While some researchers question whether adaptations retain efficacy/effectiveness [2,8], fidelity checks can be used to empower stakeholders to actively participate in the process of adaptation. Finding the optimal balance between fidelity and adaptation requires a treatment design that allows for modifications and also calls for competence among interventionists to make appropriate treatment decisions [33].

Conclusion
Despite the challenges of measuring fidelity, the seven CHART studies used a variety of methods to enhance fidelity, with the goal of ensuring the accurate delivery of smoking interventions while considering feasibility and sustainability. The examples provided can be used to guide future studies. Some of the strategies to address fidelity may be more adaptable to certain types of trials, and matching monitoring methods to study design and program deliverables will facilitate appropriate fidelity tracking. For example, direct observation may work best in RCTs with few interventionists, whereas it is more difficult in large implementation trials, which could instead use provider surveys. Researchers may determine which of the studies described is most similar to their own and consider measuring fidelity in similar ways. The strengths, limitations, potential need for adaptation, and costs of fidelity monitoring need to be taken into consideration when designing and implementing clinical trials.

Counselor checklist items
- Encourages change talk from patient (e.g., less talking by TTS and more listening). Rule of thumb: patient should talk 3 times as much as the counselor.
- Asks one question at a time.
- Corrects assumptions and uses Elicit-Provide-Elicit framework (e.g., "May I tell you a little about the effects of smoking on --?"; "Actually, that's what many people think, can I tell you more about that?").
- Uses empathic listening statements to affirm patient's experience (reflection: simple, complex as needed).
- Uses summaries/transition statements when introducing a new topic/idea. E.g., "I understand that you want to quit because of your health and for your children. These are very important reasons. Let's switch to a different, but related topic and discuss some reasons why you like smoking."

Adaptivity
- Tailors the content of counseling to patient needs (e.g., active/passive engagement style; information-seeking/avoidant; psychologically distressed/non-distressed; motivated/unmotivated). E.g., mental health/anxious patients may have difficulties with problem-solving and follow-through; these patients may need a more directive approach. Patients who are information-seeking may have lots of questions; TTS is able to answer these but can keep the counseling on course. Meets the patient where he/she is at.
- Tailors the content of counseling according to the session flow (e.g., fluidity, flexibility). If a patient has quit, and more time is spent talking about NRT, then spend a little less time on motivation building.
Comments:

Acknowledgements
We express our appreciation to the hospital and research staff (too many to name) who participated in the design and implementation of the studies. Lastly, we express our appreciation to the patients who enrolled in the studies. This study was conducted within the Consortium of Hospitals Advancing Research on Tobacco (CHART), an affiliation of tobacco control researchers from six non-profit research organizations and the National Institutes of Health, with the aims of evaluating the effectiveness and cost-effectiveness of practical interventions to reduce smoking among hospitalized patients.