Open Access. Powered by Scholars. Published by Universities.®

Medical Education Commons

Articles 1 - 29 of 29

Full-Text Articles in Medical Education

Assessment In The Interpersonal Domain: Experiences From Empathy Assessment In Medical Education, Neville Chiavaroli Aug 2019

Neville Chiavaroli

Frameworks for the teaching and assessment of 21st-century skills commonly recognise the importance of learning and skill development in the interpersonal domain. They also usually acknowledge the challenge of reliably and validly assessing students in this domain. In the field of medical education and in selecting students for medical courses, the concept of empathy has become central to representing the particular interpersonal understandings and skills expected of students and practising doctors. Attempts to assess these attributes during medical training are just as challenging as in school contexts. This presentation draws on several years’ experience of working with medical educators to …


Developing A Global Health Assessment Collaboration: Ancillary Report, Daniel Edwards, Jacob Pearce, David Wilkinson Aug 2018

Dr Daniel Edwards

This document reports on a project designed to develop an assessment collaboration between medical schools in both Australia and the United Kingdom. The project was funded by the Office for Learning and Teaching (OLT), utilising surplus funding from a broader assessment collaboration project – the Australian Medical Assessment Collaboration (OLT ID12-2482). The Global Health Assessment Collaboration (GHAC) involved five universities in Australia and the United Kingdom (UK). It developed an assessment framework and item specifications, undertook assessment item-drafting workshops, built in a process of review, and produced a focused suite of assessment items. This report …


Effect Of Trial Items On Candidate Performances In A Large-Scale Postgraduate Medical Selection Test, Luc T. Le Jun 2018

Dr Luc Tu Le

The Graduate Australian Medical School Admissions Test (GAMSAT) is a cognitive test developed by the Australian Council for Educational Research (ACER) for the Consortium of Graduate-entry Medical Schools. GAMSAT consists of two writing tasks and two multiple-choice (MC) sections: Reasoning in Humanities and Social Sciences (75 items), and Reasoning in Biological and Physical Sciences (110 items). In each administration, each of the two MC sections includes different test booklets with the same core items but different sets of trialled items. The trial item sets are intended to be equivalent in content and difficulty across the test booklets. This …


Select Readiness: Assessing The Clinical Learning Environment Of A Regional Branch Medical Campus, Margaret A. Hadinger EdD, MS, Erica T. Mahady MA, Edward Norris MD, FAPM, J Alan Otsuki MD, MBA Apr 2016

Margaret A. Hadinger, EdD, MS

No abstract provided.


Inter-Rater Variability As Mutual Disagreement: Identifying Raters’ Divergent Points Of View, A. Gingerich, Susan E. Ramlo Dec 2015

Susan E Ramlo

Whenever multiple observers provide ratings, even of the same performance, inter-rater variation is prevalent. The resulting ‘idiosyncratic rater variance’ is considered to be unusable error of measurement in psychometric models and is a threat to the defensibility of our assessments. Prior studies of inter-rater variation in clinical assessments have used open response formats to gather raters’ comments and justifications. This design choice allows participants to use idiosyncratic response styles that could result in a distorted representation of the underlying rater cognition and skew subsequent analyses. In this study we explored rater variability using the structured response format of Q methodology. …


The Rationale For And Use Of Assessment Frameworks: Improving Assessment And Reporting Quality In Medical Education, Jacob Pearce, Daniel Edwards, Julian Fraillon, Hamish Coates, Benedict Canny, David Wilkinson Jun 2015

Julian Fraillon

An assessment framework provides a structured conceptual map of the learning outcomes of a programme of study along with details of how achievement of the outcomes can be measured. The rationale for using frameworks to underpin the targeting of essential content components is especially relevant for the medical education community. Frameworks have the capacity to improve validity and reliability in assessment, allowing test developers to more easily create robust assessment instruments. The framework used by the Australian Medical Assessment Collaboration (AMAC) is an interesting and relevant case study for the international community as it draws and builds on established processes …


The Rationale For And Use Of Assessment Frameworks: Improving Assessment And Reporting Quality In Medical Education, Jacob Pearce, Daniel Edwards, Julian Fraillon, Hamish Coates, Benedict Canny, David Wilkinson Jun 2015

Dr Jacob Pearce

An assessment framework provides a structured conceptual map of the learning outcomes of a programme of study along with details of how achievement of the outcomes can be measured. The rationale for using frameworks to underpin the targeting of essential content components is especially relevant for the medical education community. Frameworks have the capacity to improve validity and reliability in assessment, allowing test developers to more easily create robust assessment instruments. The framework used by the Australian Medical Assessment Collaboration (AMAC) is an interesting and relevant case study for the international community as it draws and builds on established processes …


An Investigation Into The Use Of Filmed Scenarios For The Testing Of ‘Understanding People’ In Medical Selection Tests, Jennifer Bryce, Judy Nixon Mar 2015

Dr Jennifer Bryce

No abstract provided.


Gamsat: A 10-Year Retrospective Overview, With Detailed Analysis Of Candidates’ Performance In 2014, Annette Mercer, Brendan Crotty, Louise Alldridge, Luc Le, Veronica Vele Mar 2015

Dr Luc Tu Le

Background: The Graduate Australian Medical School Admissions Test (GAMSAT) is undertaken annually in centres around Australia and a small number of overseas locations. Most Australian graduate-entry medical schools also use Grade Point Average and an interview score for selection. The aim of this study was to review the performance of GAMSAT over the last 10 years. The study provides an analysis of the impact of candidates’ gender, age, language background, level of academic qualification and background discipline on performance, as well as details on the performance of higher-scoring candidates. These analyses were undertaken on the 2014 data, and trends in the …


Assessment Of Medical Students’ Learning Outcomes In Australia: Current Practice, Future Possibilities, David Wilkinson, Benedict Canny, Jacob Pearce, Hamish Coates, Daniel Edwards Feb 2015

Dr Jacob Pearce

All 19 medical schools in Australia examine and assess the performance of their students, but do so largely in isolation from each other. That is, most schools design, develop and deliver their own exams, against their own curriculum and standards, and students pass, fail and are graded with little external moderation or comparison. Accreditation of schools by the Australian Medical Council (AMC) provides some reassurance that assessment practices are appropriate in medical schools. However, very limited data are available for benchmarking performance against any national standard, or between medical schools in Australia. The Australian Medical Assessment Collaboration has been designed …


Determining The Quality Of Assessment Items In Collaborations: Aspects To Discuss To Reach Agreement Developed By The Australian Medical Assessment Collaboration, Lambert Schuwirth, Jacob Pearce Feb 2015

Dr Jacob Pearce

The Australian Medical Assessment Collaboration (AMAC) project, funded by the Office for Learning and Teaching, seeks to provide an infrastructure and a road map to support collaboration between Australian medical schools in matters of assessment. This may not seem especially new, because several collaborations are already taking place in Australia; typically, they relate to joint item banks (such as the IDEAL consortium) or joint test administration (such as the International Foundations of Medicine tests). The AMAC project seeks to build on these existing collaborations in two ways: first, by tying these initiatives together and thus bundling the …


Same Admissions Tools, Different Outcomes: A Critical Perspective On Predictive Validity In Three Undergraduate Medical Schools, Daniel Edwards, Tim Friedman, Jacob Pearce Feb 2015

Dr Jacob Pearce

Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective on predictive validity in medical admissions. This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude …


The Rationale For And Use Of Assessment Frameworks: Improving Assessment And Reporting Quality In Medical Education, Jacob Pearce, Daniel Edwards, Julian Fraillon, Hamish Coates, Benedict Canny, David Wilkinson Dec 2014

Dr Daniel Edwards

An assessment framework provides a structured conceptual map of the learning outcomes of a programme of study along with details of how achievement of the outcomes can be measured. The rationale for using frameworks to underpin the targeting of essential content components is especially relevant for the medical education community. Frameworks have the capacity to improve validity and reliability in assessment, allowing test developers to more easily create robust assessment instruments. The framework used by the Australian Medical Assessment Collaboration (AMAC) is an interesting and relevant case study for the international community as it draws and builds on established processes …


Improving The Quality Of Medical Education, Daniel Edwards Oct 2014

Dr Daniel Edwards

An ongoing collaboration is developing tools and processes to help prove and improve the quality of medical education in Australia through quality comparison, the sharing of expertise and high-quality assessment, as Dan Edwards explains.


Australian Medical Assessment Collaboration: From Proof Of Concept To Proof Of Sustainability: Final Report 2014, Daniel Edwards, David Wilkinson Sep 2014

Dr Daniel Edwards

This is the final report for AMAC-2, entitled Australian Medical Assessment Collaboration: from proof of concept to proof of sustainability (OLT project ID12-2482). The project advanced previous work funded by the ALTC and was undertaken from early 2013 to mid-2014. AMAC-2 built on the proof of concept achieved through the initial AMAC project, with the aim of establishing an ongoing, sustainable and successful collaboration between medical schools in Australia and New Zealand.


Implementing Common Assessment: Lessons And Models From Amac Developed By The Australian Medical Assessment Collaboration, Daniel Edwards Sep 2014

Dr Daniel Edwards

The aim of this document is to provide insight into the implementation of common assessments in higher education, in order to assist future work on projects of this kind. The discussion draws heavily on the AMAC experience, attempting to broaden the learning from this project for use in future collaborations. The focus of this project has been on medical education, and as such, much of the detail is related to this field. However, it is hoped that the general ideas discussed here can be seen as informative for other fields and disciplines in higher education and at …


Determining The Quality Of Assessment Items In Collaborations: Aspects To Discuss To Reach Agreement, Lambert Schuwirth, Jacob Pearce Aug 2014

Dr Jacob Pearce

No abstract provided.


Mental Health Of University Students: Perspectives For Intervention And Prevention: An Indo-Canadian Collaborative Project, Amresh Srivastava, Rahel Eynan, Ravi Shah, Laxaman Dutt, Shubhangi Parkar, Tss Rao, Dp Giridhar, Rakesh Bhandari, Nagesh Bhandari, Paul Link May 2014

Amresh Srivastava

Purpose: The study aimed to determine the levels of psychological distress of university students and examine teachers’ awareness and opinions concerning suicide prevention. Methods: The study used a two-phase, sequential mixed-method approach converging quantitative and qualitative methodologies. In the quantitative study, the 12-item General Health Questionnaire (GHQ-12) was used to measure psychological wellbeing in a student sample (n=110). The qualitative study consisted of a focus group with students (n=200) and faculty members (n=25). Results: The scores for the sample ranged from 0 to 33, with a mean score of 10.25 (SD=6.14). The majority of respondents (70.6%) endorsed …
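
For context on the figures quoted above, the following is a minimal sketch (not the study’s own code, written in Python purely for illustration) of how GHQ-12 totals and the reported summary statistics are typically derived. It assumes the common 0-1-2-3 Likert scoring, which gives a possible range of 0-36 consistent with the 0-33 observed here; the response matrix is randomly generated, not the study’s data.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical responses: 110 students x 12 GHQ-12 items, each coded 0-3 (Likert scoring).
responses = rng.integers(0, 4, size=(110, 12))

totals = responses.sum(axis=1)  # one GHQ-12 total per student (possible range 0-36)
print(f"range {totals.min()}-{totals.max()}, "
      f"mean {totals.mean():.2f}, SD {totals.std(ddof=1):.2f}")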


Predicting Success In Medical Studies, Daniel Edwards Feb 2014

Dr Daniel Edwards

Daniel Edwards discusses the findings of a multi-institution investigation of the ability of Australia’s medical school admissions processes to predict future achievement levels.


Developing Outcomes Assessments For Collaborative, Cross-Institutional Benchmarking: Progress Of The Australian Medical Assessment Collaboration, Daniel Edwards, David Wilkinson, Benedict Canny, Jacob Pearce, Hamish Coates Jan 2014

Dr Daniel Edwards

The Australian Medical Assessment Collaboration (AMAC) began in 2010. This article charts the development of the collaboration over its initial years. AMAC was instigated as a way of improving the quality of medical education, prompted by recognition of the need for tools to compare and evaluate learning outcomes, acknowledgement of the need for high-quality assessment, and a desire to share expertise in these areas. In a climate of increasing regulation and accountability, the collaboration was formed as a means of strengthening assessment practices by, with and for medical schools. This article provides an overview of the background issues stimulating the …


Same Admissions Tools, Different Outcomes: A Critical Perspective On Predictive Validity In Three Undergraduate Medical Schools, Daniel Edwards, Tim Friedman, Jacob Pearce Jan 2014

Dr Tim Friedman

Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective on predictive validity in medical admissions. This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude …


Select Readiness: Assessing The Clinical Learning Environment Of A Regional Branch Medical Campus, Margaret A. Hadinger EdD, MS, Erica T. Mahady MA, Edward Norris MD, FAPM, J Alan Otsuki MD, MBA Nov 2013

Edward R Norris MD, FAPA, FAPM

No abstract provided.


Resident Orientation: A Baseline Assessment, Amy B. Smith PhD, James P. Orlando EdD, Julie Dostal MD, Joseph E. Patruno MD Apr 2013

Amy B Smith PhD

No abstract provided.


Collaborative Assessment For Learning, Jacob Pearce Mar 2013

Dr Jacob Pearce

No abstract provided.


Same Admissions Tools, Different Outcomes: A Critical Perspective On Predictive Validity In Three Undergraduate Medical Schools, Daniel Edwards, Tim Friedman, Jacob Pearce Dec 2012

Dr Daniel Edwards

Admission to medical school is one of the most highly competitive entry points in higher education. Considerable investment is made by universities to develop selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective on predictive validity in medical admissions. This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude …


Assessment Of Medical Students’ Learning Outcomes In Australia: Current Practice, Future Possibilities, David Wilkinson, Benedict Canny, Jacob Pearce, Hamish Coates, Daniel Edwards Dec 2012

Dr Daniel Edwards

All 19 medical schools in Australia examine and assess the performance of their students, but do so largely in isolation from each other. That is, most schools design, develop and deliver their own exams, against their own curriculum and standards, and students pass, fail and are graded with little external moderation or comparison. Accreditation of schools by the Australian Medical Council (AMC) provides some reassurance that assessment practices are appropriate in medical schools. However, very limited data are available for benchmarking performance against any national standard, or between medical schools in Australia. The Australian Medical Assessment Collaboration has been designed …


Development Of A Peer Teaching-Assessment Program And A Peer Observation And Evaluation Tool, Jennifer M. Trujillo, Margarita V. Divall, Judith T. Barr, Michael J. Gonyeau, Jenny A. Van Amburgh, S. James Matthews, Donna M. Qualters Apr 2012

Samuel James Matthews

Objectives. To develop a formalized, comprehensive, peer-driven teaching assessment program and a valid and reliable assessment tool. Methods. A volunteer taskforce was formed and a peer-assessment program was developed using a multistep, sequential approach and the Peer Observation and Evaluation Tool (POET). A pilot study was conducted to evaluate the efficiency and practicality of the process and to establish interrater reliability of the tool. Intra-class correlation coefficients (ICC) were calculated. Results. ICCs for 8 separate lectures evaluated by 2-3 observers ranged from 0.66 to 0.97, indicating good interrater reliability of the tool. Conclusion. Our peer assessment program for large classroom …
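
As a pointer for readers unfamiliar with the statistic reported above, here is a minimal sketch (not the authors’ code, written in Python for illustration) of one common intra-class correlation, ICC(2,1) in Shrout and Fleiss terms: two-way random effects, single rater, absolute agreement. It assumes a complete subjects-by-raters matrix; the POET-style scores below are invented, and the study’s design of 2-3 observers per lecture would additionally require handling missing cells.

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) for a complete (n_subjects, k_raters) ratings matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per lecture (subject)
    col_means = ratings.mean(axis=0)  # per observer (rater)
    ms_r = k * np.sum((row_means - grand) ** 2) / (n - 1)  # between-subject mean square
    ms_c = n * np.sum((col_means - grand) ** 2) / (k - 1)  # between-rater mean square
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_e = np.sum(resid ** 2) / ((n - 1) * (k - 1))  # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical scores from 3 observers rating 8 lectures on one POET-style item.
scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2],
                   [4, 4, 5], [3, 2, 3], [5, 4, 5], [4, 4, 4]], dtype=float)
print(round(icc_2_1(scores), 2))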


Development Of A Peer Teaching-Assessment Program And A Peer Observation And Evaluation Tool, Jennifer M. Trujillo, Margarita V. Divall, Judith T. Barr, Michael J. Gonyeau, Jenny A. Van Amburgh, S. James Matthews, Donna M. Qualters Apr 2012

Margarita V. DiVall

Objectives. To develop a formalized, comprehensive, peer-driven teaching assessment program and a valid and reliable assessment tool. Methods. A volunteer taskforce was formed and a peer-assessment program was developed using a multistep, sequential approach and the Peer Observation and Evaluation Tool (POET). A pilot study was conducted to evaluate the efficiency and practicality of the process and to establish interrater reliability of the tool. Intra-class correlation coefficients (ICC) were calculated. Results. ICCs for 8 separate lectures evaluated by 2-3 observers ranged from 0.66 to 0.97, indicating good interrater reliability of the tool. Conclusion. Our peer assessment program for large classroom …


Development Of A Peer Teaching-Assessment Program And A Peer Observation And Evaluation Tool, Jennifer M. Trujillo, Margarita V. Divall, Judith T. Barr, Michael J. Gonyeau, Jenny A. Van Amburgh, S. James Matthews, Donna M. Qualters Apr 2012

Jenny A. Van Amburgh

Objectives. To develop a formalized, comprehensive, peer-driven teaching assessment program and a valid and reliable assessment tool. Methods. A volunteer taskforce was formed and a peer-assessment program was developed using a multistep, sequential approach and the Peer Observation and Evaluation Tool (POET). A pilot study was conducted to evaluate the efficiency and practicality of the process and to establish interrater reliability of the tool. Intra-class correlation coefficients (ICC) were calculated. Results. ICCs for 8 separate lectures evaluated by 2-3 observers ranged from 0.66 to 0.97, indicating good interrater reliability of the tool. Conclusion. Our peer assessment program for large classroom …