MEDICAL EDUCATION
Year : 2022  |  Volume : 8  |  Issue : 2  |  Page : 156-160

Challenges of objective structured clinical examination as a tool in medical assessment


1 Department of Community Medicine and Family Medicine, All India Institute of Medical Sciences, Bhubaneswar, Odisha, India
2 The Brooklyn Hospital Center, New York, USA
3 Department of Community Health, St. John's National Academy of Health Sciences, Bengaluru, Karnataka, India
4 Department of Community Health, St. John's Medical College, Bengaluru, Karnataka, India

Date of Submission: 05-Feb-2022
Date of Decision: 06-Oct-2022
Date of Acceptance: 07-Oct-2022
Date of Web Publication: 31-Dec-2022

Correspondence Address:
Dr. Priyamadhaba Behera
Department of Community Medicine and Family Medicine, All India Institute of Medical Sciences, Bhubaneswar - 751 019, Odisha
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/ijcfm.ijcfm_12_22

Abstract

Medical education is witnessing changes across the globe to produce more competent and responsive medical graduates who can meet patients' growing needs. Medical educators are striving for more objective and relevant methods of assessing medical graduates. The objective structured clinical examination (OSCE), introduced in 1975 for clinical evaluation, has gone through many changes over the past 45 years and matured over time. This article describes the challenges of OSCE as a tool in medical assessment from the students' and organizers' perspectives. We also suggest feasible solutions to these challenges when conducting OSCE to assess medical students.

Keywords: Clinical competence, education, graduate, medical, objective structured clinical examination


How to cite this article:
Behera P, Gopi G, Siddaiah A, Sridhar PR, Patro BK, Subba SH. Challenges of objective structured clinical examination as a tool in medical assessment. Indian J Community Fam Med 2022;8:156-60



Introduction


Modern medical education aspires to be a competency-based, student-driven, and patient-centered model. Assessment of clinical competency is a crucial part of modern medical education, and developing an efficient assessment technique is challenging. The three decades from 1965 to 1995 saw considerable growth in innovative educational efforts aimed at skill acquisition. Since the 1990s, Miller's pyramid of clinical competence and Bloom's cognitive taxonomy of educational objectives have stood as the frameworks for assessing competence in medical education.[1] There has been a transition from the standard pen-and-paper examination, which mainly tested knowledge and facts, toward more sophisticated evaluation.[1] Written examinations test only cognition, which is just one component of a student's competency.[2] Traditional clinical assessments are mainly confined to obtaining the patient's history, demonstrating physical examinations, and evaluating a narrow range of technical skills, shortcomings that were criticized in the literature more than 40 years ago.[3] The examiner's subjective bias toward different students, the dominant role luck plays when a clinical case is randomly assigned to a student, and the variation in marking standards between examiners are some of the serious limitations.[4] Whether the purpose is to give students feedback on their clinical skills, to certify a level of achievement, or to evaluate the curriculum's effectiveness, the method of assessment has a significant role to play. If the evaluation focuses mainly on basic cognitive skills, students will be less motivated to work on their affective skills (e.g., interpersonal/communication skills), which play a considerable role in patient care. The outcome of such an assessment could be dissatisfied patients and, thus, poor health outcomes.

Harden and Gleeson[3] introduced the objective structured clinical examination (OSCE) in 1975. Initially described as a timed examination in which medical students interact with a series of simulated patients (SPs) at stations involving history taking, physical examination, counseling, or patient management, it has since broadened its scope and undergone many modifications to suit different circumstances.[2] It has been used to evaluate the most critical areas of health care, namely the ability to perform a specific skill, to obtain and interpret data, to solve problems, to teach, to communicate, and to handle unpredictable situations that cannot be evaluated using conventional methods. OSCE has become an accepted method for evaluating clinical competence. Candidates rotate through a series of stations, each of which assesses a particular domain of clinical competency within a brief interval of time. Every candidate is evaluated with the same standardized checklist, meets the same standardized patient, and is assessed by the same or an equivalent examiner. It thus represents a direct and accurate means of determining a student's clinical skills and interaction with the patient. The various steps of the OSCE process are summarized in [Figure 1].
Figure 1: Flow diagram of OSCE process. OSCE: Objective structured clinical examination

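To make the rotation logistics concrete, the following is a minimal sketch of how candidates can be cycled through stations so that each candidate visits every station exactly once, with at most one candidate per station in any time slot; the candidate and station names are hypothetical.

```python
# Minimal sketch of an OSCE rotation: every candidate visits every station
# exactly once, one candidate per station per time slot. Names are invented.

def rotation(candidates: list[str], stations: list[str]):
    """Yield (slot, candidate, station), shifting candidates each slot."""
    k = len(stations)
    assert len(candidates) <= k, "at most one candidate per station"
    for slot in range(k):
        for i, cand in enumerate(candidates):
            yield slot, cand, stations[(i + slot) % k]

for slot, cand, stn in rotation(["A", "B", "C"],
                                ["History", "Counseling", "Examination"]):
    print(f"slot {slot}: candidate {cand} -> {stn}")
```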



Challenges of Objective Structured Clinical Examination


General challenges

Extreme compartmentalization

Although OSCE is a great tool for assessing clinical skills and practical knowledge,[2] at the level of a particular question or station it has limitations. In medicine, a holistic approach toward patient care is encouraged rather than mere symptomatic treatment. Arriving at a final diagnosis that can explain the course of the patient's disease is the ultimate aim, and such a practice should be encouraged early in a student's medical education. Ideally, assessment should include the student's ability to check for various signs and correlate them to arrive at a provisional diagnosis, order the needful investigations, interpret them, arrive at a final diagnosis, formulate a treatment plan, and counsel the patient. However, time and resource constraints make it practically impossible to set up a station with multiple signs and symptoms of a single disease and assess all the abovementioned domains. For example, testing a student's ability to diagnose and manage a patient with tuberculosis would need extensive history taking and a thorough physical examination covering multiple organ systems (general examination, cardiovascular, gastrointestinal, and respiratory systems). From this, the student would arrive at a few differentials and would then have to order specific laboratory investigations (e.g., sputum smear for acid-fast bacilli, chest X-ray, sputum culture, tuberculin skin test, or computed tomography [CT] scan of the thorax). Once the laboratory results are available, the student should interpret them, confirm the diagnosis, and develop a framework for further management. Finally comes the most crucial part: communicating the diagnosis and the proposed treatment plan to the patient, and allaying the patient's concerns and doubts. All these steps need to be assessed to determine a student's overall competency as a clinician and the ability to apply theoretical knowledge in clinical practice. OSCE cannot assess all these domains together in an integrated manner.

Limited reproducibility of many real-life scenarios

An expert panel develops each station in an OSCE, and SPs are used to imitate clinical scenarios. Because SPs are healthy people, there is a limit to the clinical scenarios they can act out. Although symptoms such as pain can be acted out and skin lesions can be reproduced through makeup, numerous other clinical signs, such as a breast lump, a heart murmur, abnormal respiratory sounds, or an abdominal mass, cannot. With the advancement of medical education technology, electronic media have been used to overcome some of these limitations. Still, numerous crucial clinical signs cannot be reproduced and hence cannot be assessed (e.g., hepatojugular reflux and the Argyll Robertson pupil).

Organizers' perspective

Resource-intensive exercise

OSCE is a resource-intensive technique for assessing competency.[3] Besides funds for constructing stations, the human resources and time required for preparation and execution are considerable. It requires careful planning: setting up the different stations, training the SPs, training the evaluators, procuring materials (e.g., manikins and instruments), and forming a general/core committee to oversee and ensure the smooth completion of the evaluation.[3] There has to be coordination between all the members and ongoing supervision to deal with any adversities. A usual OSCE evaluation can have anywhere between 10 and 15 stations, with an average of 15-20 min per station.[5] Adequate, strategically placed breaks are often needed to prevent fatigue and underperformance in both the students and the persons assessing them. Therefore, almost an entire day is usually needed to complete an OSCE evaluation. The larger number of trained professionals and the resources needed for an OSCE necessitate more monetary support, making it costlier than alternative forms of assessment.
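A rough back-of-envelope calculation, using the station counts and timings cited above, shows why a full day is usually required; the number and length of breaks are illustrative assumptions.

```python
# Rough estimate of total OSCE running time, using the figures cited above
# (10-15 stations, 15-20 min per station). The break count and length are
# illustrative assumptions, not prescribed values.

def osce_duration_hours(stations, minutes_per_station,
                        breaks=2, break_minutes=20):
    """Total running time in hours for one full rotation of candidates."""
    testing = stations * minutes_per_station
    return (testing + breaks * break_minutes) / 60

print(round(osce_duration_hours(10, 15), 1))  # ~3.2 h at the low end
print(round(osce_duration_hours(15, 20), 1))  # ~5.7 h, most of a working day
```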

Content selection and design

The main advantage of OSCE is its ability to test the psychomotor and affective domains.[3] Hence, developing an OSCE that can adequately test a student's competency needs careful planning and design.[6] A station's objectives should be defined first, and a clinical scenario then developed around them. Some stations require the student to interpret specific props such as images or multimedia. When such a station is designed, the accessories should be selected first and the scenario developed around them; searching for props to support an already-created clinical situation is very difficult, and a case may have to be revised extensively to match the available props. The time allotted to each station has to be defined according to the complexity of the assessment. One of the significant limitations of any method of evaluation is predictability with repetition, and OSCE is no exception: with repeated OSCEs, the common signs and procedures students are asked to elicit become predictable. This is a significant concern, since the total pool of questions/assessment modules that can be used in OSCEs is very small compared with other methods of assessment.

Other important challenges that have to be addressed are as follows:

  1. Framing the stem/questions of OSCE stations can be difficult even for an experienced teacher unless adequately trained, as the question stem needs to be concise and clear enough that the candidate knows what to do. For example, a station assessing a student's counseling skills regarding breast self-examination should be designed so that the SP represents the population characteristics and literacy of the local community
  2. Preparing a well-inclusive checklist in which all items are demonstrable can be difficult. The number of items in the checklist should correlate with the time provided, and the checklist should be tailored to the case, for example, one item or multiple items on immunization counseling. A useful checklist has items that are discrete, observable, and dichotomous. The checklist's language and detail should be clear: for example, instead of just giving “side effects” as a component, it should specify “local reactions like swelling, risk of mild allergic reaction like itching, or as severe as anaphylaxis” (a minimal sketch of such a checklist follows this list).
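As a minimal sketch of these recommendations, the checklist below for a hypothetical immunization-counseling station keeps every item discrete, observable, and dichotomous; the item wording and mark weights are invented for illustration and do not represent a validated instrument.

```python
# Hypothetical dichotomous checklist for an immunization-counseling station.
# Each item is discrete, observable, and scored yes/no, as recommended above;
# the item wording and weights are illustrative only.

CHECKLIST = [
    ("Greets the patient and introduces self", 1),
    ("Explains which vaccine is due and why", 2),
    ("Mentions local reactions such as swelling", 1),
    ("Mentions mild allergic reactions such as itching", 1),
    ("Mentions the rare risk of anaphylaxis", 1),
    ("Invites and answers the patient's questions", 2),
]

def score(observed: list[bool]) -> float:
    """Percentage score from the evaluator's yes/no observations."""
    assert len(observed) == len(CHECKLIST)
    earned = sum(w for (_, w), done in zip(CHECKLIST, observed) if done)
    total = sum(w for _, w in CHECKLIST)
    return 100 * earned / total

print(score([True, True, True, False, True, True]))  # one item missed: 87.5
```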


Absence of a trained professional for evaluation

It is neither feasible nor practical to keep a certified physician at each station to assess the student's skills. Usually, the evaluators are nonclinicians who have been trained to evaluate the student against the checklist. This narrows the domains that can be tested: for example, a student may adopt a different technique to palpate the liver, which a clinician would recognize, but if that technique is not included in the checklist or the evaluator is unaware of it, the chances of marking the student as not competent are very high. Furthermore, no credit is given if the student performs additional examinations that are not included in the checklist, as the evaluator may not be trained to interpret them, whereas a clinician could understand the need for such a procedure and give the student extra credit for doing so. Checklists are not as practical as they were thought to be.[7] Regehr et al. compared the psychometric properties of checklists and global rating scales for assessing competencies in an OSCE-format examination.[8] They concluded that “global rating scales scored by experts showed higher interstation reliability, better construct validity, and better concurrent validity than did checklists.” Similarly, Van der Vleuten et al. concluded from various sources that “objectified” measures did not consistently lead to higher reliability.[9] The reliability of using checklists for assessment has been well discussed by Cunnington et al., who concluded that global ratings by experts might even be superior to checklists.[10] However, having an expert at each station is rarely possible.
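The interstation reliability that Regehr et al. report is commonly summarized with Cronbach's alpha; the following sketch computes it from a candidates-by-stations score matrix, with the sample scores invented purely for illustration.

```python
# Cronbach's alpha for inter-station reliability: rows are candidates,
# columns are OSCE stations. The score matrix is invented for illustration.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of station variances / variance of totals)."""
    k = scores.shape[1]
    station_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - station_var / total_var)

scores = np.array([[7, 8, 6, 7],
                   [5, 6, 5, 6],
                   [9, 9, 8, 9],
                   [6, 7, 6, 5]], dtype=float)
print(round(cronbach_alpha(scores), 2))  # high alpha: stations rank alike
```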

Limited knowledge assessment

While OSCEs remain superior for evaluating the affective domains, they cannot adequately assess a student's knowledge.[3] Traditional examination techniques such as question-and-answer tests or multiple-choice questions are still superior when an adequate and broad evaluation of knowledge is needed. OSCE should not be seen as a replacement for, say, a case presentation or viva; the saying “the eyes cannot see what the brain doesn't know” sheds light on the importance of acquiring knowledge and information. Medical science is evolving continuously, and clinicians must stay up to date with the latest information.

Organizational training

Multiple personnel are required to ensure the smooth conduct of an OSCE. These include the core panel of medical education specialists, clinicians to develop clinical scenarios, evaluators to apply the checklists, SPs, and other supporting staff. All personnel must undergo basic and advanced training appropriate to the roles and duties they are expected to perform. The most difficult part is training the SPs and evaluators, to ensure that the clinical scenario runs as intended and that students are evaluated properly.

Students' perspective

Need for prior exposure

Unlike the conventional techniques of evaluation, OSCE is a relatively new model, and students should be familiarized with it before their actual examination. The entire process of evaluation, its benefits, and how to perform in it need to be explained. Unless students know what is expected of them, much time is wasted while they work out what to do. Since each OSCE is different, instructions and orientation have to be given before each evaluation; a generalized rule/guideline cannot be developed.

Focused preparation

Preparation for an OSCE differs from that for other conventional examinations. The focus has to be mainly on interpreting, analyzing, and performing clinical examinations in a clinical scenario. Since efforts are made to resemble actual clinical situations, the student must know how to evaluate and handle them. An OSCE requires the student to perform a procedure rather than merely describe it, which calls for preparation focused on the correct technique for performing each procedure.

Stressful situation

In general, OSCE is more stressful for students than conventional examinations. OSCEs involve real people as SPs, and performing a procedure in front of an evaluator is stressful in itself. Although the evaluators merely observe the students, some of their body language can induce further anxiety.[11] An OSCE can last anywhere between 5 and 8 h, and such a long period puts immense pressure and fatigue on the students.

The challenges related to OSCE are summarized in [Panel 1].



Way forward

The OSCE, as a clinical assessment method, stands superior to traditional methods for its objectivity, uniformity, and the versatility of clinical scenarios that can be assessed. However, it is not without limitations, and a poorly planned and executed OSCE leads to resource wastage and poor evaluation. In India, the prevalent system of evaluation in medical education consists of pen-and-paper tests and viva voce; OSCEs are either not used at all or used merely as supplementary tools without any significant contribution to the evaluation outcome. This could be due to a lack of expertise in the area or to resource constraints. The time has come to incorporate OSCE into medical education assessment not merely as an adjunct to the current evaluation techniques, but as one of the core pillars of the clinical skill evaluation process. OSCEs should be part of both formative and summative assessments, and teacher training in medical education should include OSCE on a priority basis.

The reliability and validity of the OSCE need to be examined. Nationally developed OSCEs, based on current evidence, may be advocated as an optimal way to ensure the minimum required competencies at each level of medical education. This involves forming a nationwide panel of experts to develop a core framework that conducting institutions can then adapt to match their goals. It can be done by collecting OSCE data from various institutions and coding the cases with unique identifiers after rigorous expert review. Institutions with limited resources or expertise can then use the cases from these repositories for assessment, and the unique identifier for each case supports validity across institutions nationwide.
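As a sketch of what a repository entry might look like, the record below stores a case with its unique identifier; the field names, ID format, and sample content are all hypothetical, since no such national schema is defined in this article.

```python
# Hypothetical schema for a shared national OSCE-case repository entry,
# as proposed above; field names and the ID format are invented.
from dataclasses import dataclass, field

@dataclass
class OsceCase:
    case_id: str            # unique identifier assigned after expert review
    competency: str         # domain the station assesses
    stem: str               # instructions shown to the candidate
    checklist: list[str]    # dichotomous, observable items
    duration_min: int = 15
    props: list[str] = field(default_factory=list)

case = OsceCase(
    case_id="IN-OSCE-2022-0001",
    competency="Counseling",
    stem="Counsel a mother on routine immunization for her infant.",
    checklist=["Explains schedule", "Mentions local reactions",
               "Answers questions"],
)
print(case.case_id)
```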

Although OSCE cannot be considered the “gold standard” for assessing competency, when combined with the traditional system of assessment it is superior to the prevalent evaluation methods. Further research is needed to develop a well-designed, reliable, and valid assessment technique that equally assesses theoretical knowledge and clinical skill in medical education.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Turner JL, Dankoski ME. Objective structured clinical exams: A critical review. Fam Med 2008;40:574-8.
2. Zayyan M. Objective structured clinical examination: The assessment of choice. Oman Med J 2011;26:219-22.
3. Harden RM, Gleeson FA. Assessment of clinical competence using an Objective Structured Clinical Examination (OSCE). Med Educ 1979;13:41-54.
4. Wilson GM, Lever R, Harden RM, Robertson JI. Examination of clinical examiners. Lancet 1969;1:37-40.
5. Swing SR. The ACGME outcome project: Retrospective and prospective. Med Teach 2007;29:648-54.
6. Wass V, van der Vleuten C. The long case. Med Educ 2004;38:1176-80.
7. Reznick RK, Regehr G, Yee G, Rothman A, Blackmore D, Dauphinée D. Process-rating forms versus task-specific checklists in an OSCE for medical licensure. Medical Council of Canada. Acad Med 1998;73:S97-9.
8. Regehr G, MacRae H, Reznick RK, Szalay D. Comparing the psychometric properties of checklists and global rating scales for assessing performance on an OSCE-format examination. Acad Med 1998;73:993-7.
9. Van der Vleuten CP, Norman GR, De Graaff E. Pitfalls in the pursuit of objectivity: Issues of reliability. Med Educ 1991;25:110-8.
10. Cunnington JP, Neville AJ, Norman GR. The risks of thoroughness: Reliability and validity of global ratings and checklists in an OSCE. Adv Health Sci Educ Theory Pract 1996;1:227-33.
11. Verma M, Singh T. Attitudes of medical students towards Objective Structured Clinical Examination (OSCE) in pediatrics. Indian Pediatr 1993;30:1259-61.

