AERA 2023 - Virtual Research Learning Series

  • Recorded On: 06/01/2023

    This course will introduce advanced methods in meta-analysis. Topics covered include models for handling multiple effect sizes per study (dependent effect sizes) and exploring heterogeneity, the use of meta-analysis structural equation modeling (MASEM), and an introduction to single-case experimental design meta-analysis. The statistical package R will be used to conduct the statistical techniques discussed. Participants are encouraged to bring their own research in progress to the course. The activities will include lecture, hands-on exercises, and individual consultation. The target audience is researchers with systematic review and meta-analysis experience who are interested in learning advanced methods for meta-analysis. Knowledge of basic descriptive statistics, systematic review, and basic meta-analysis is assumed.
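    The "basic meta-analysis" prerequisite can be illustrated with a minimal sketch. The helper below (an illustrative function, not course material) pools independent effect sizes with a DerSimonian-Laird random-effects model, the usual starting point before the dependent-effect-size models this course covers:

```python
import math

def random_effects_pool(effects, variances):
    """Pool effect sizes with a DerSimonian-Laird random-effects model.

    effects: list of study effect sizes; variances: their sampling variances.
    Returns (pooled estimate, standard error, tau^2).
    """
    k = len(effects)
    w = [1 / v for v in variances]                                # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                            # between-study variance
    w_re = [1 / (v + tau2) for v in variances]                    # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2
```

    When the studies are homogeneous, tau^2 is truncated to zero and the pooled estimate coincides with the fixed-effect mean; heterogeneous effects yield tau^2 > 0 and wider confidence intervals.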

  • Recorded On: 06/15/2023

    The Trajectories into Early Career Research dataset contains 8 years of surveys (biweekly and annual), interviews, and performance-based data from a national cohort of 336 Ph.D. students who matriculated into U.S. biological sciences programs in Fall 2014. These deidentified data will be publicly released on the Open Science Framework data repository in 2023. This course will (1) teach participants how to access data and documentation, (2) introduce the instruments, interview protocols, and data formats, (3) provide instruction and code to prepare data for analysis, and (4) facilitate discussions of participant-identified research questions and analytic techniques. The course consists of an overview lecture introducing the data set and major study findings to date, along with live demonstrations and hands-on practice accessing and structuring data. We recommend (but do not require) that participants have data analysis software readily available. Participants will leave the course with downloaded, pre-processed data appropriate to their research questions/methods, reference materials to support future data access and analysis, and copies of literature reporting key methods and findings from the data set. The course is geared toward graduate students and early- to mid-career scholars—especially those whose access to data was disrupted by the pandemic—with interests in postsecondary education, transitions into STEM careers, adult learning and motivation, research training, and/or longitudinal or mixed methods analytic techniques.

  • Recorded On: 07/11/2023

    Labeling or classifying textual data is an expensive and consequential challenge for mixed methods and qualitative researchers. The rigor and consistency behind the construction of these labels may ultimately shape research findings and conclusions. A methodological conundrum underlies this challenge: human reasoning is needed for classification that leads to deeper and more nuanced understandings, yet manual human classification brings well-documented increases in classification inconsistencies and errors, particularly when dealing with vast amounts of text and teams of coders. This course offers an analytic framework designed to leverage the power of machine learning to classify textual data while preserving the role of human reasoning in the classification process. The framework was designed to mirror as closely as possible the line-by-line coding employed in manual code identification, relying instead on latent Dirichlet allocation, text mining, MCMC, Gibbs sampling, and advanced data retrieval and visualization. A set of analytic outputs provides complete transparency into the classification process and helps recreate the contextualized meanings embedded in the original texts. Prior to the course, participants are encouraged to read these two articles:
    González Canché, M. S. (2023). Machine Driven Classification of Open-Ended Responses (MDCOR): An analytic framework and free software application to classify longitudinal and cross-sectional text responses in survey and social media research. Expert Systems with Applications, 215. https://doi.org/10.1016/j.eswa.2022.119265
    González Canché, M. S. (2023). Latent Code Identification (LACOID): A machine learning-based integrative framework [and open-source software] to classify big textual data, rebuild contextualized/unaltered meanings, and avoid aggregation bias. International Journal of Qualitative Methods, 22. https://doi.org/10.1177/16094069221144940
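    To give a rough sense of the machinery named above (latent Dirichlet allocation fit by MCMC), a collapsed Gibbs sampler for LDA can be sketched in a few dozen lines. This is an illustrative sketch only, not the MDCOR or LACOID software, and all names are invented:

```python
import random

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA. docs: list of token lists.

    Returns per-token topic assignments, doc-topic counts,
    topic-word counts, and the vocabulary.
    """
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    vid = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    ndk = [[0] * n_topics for _ in docs]        # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]    # topic-word counts
    nk = [0] * n_topics                         # tokens per topic
    z = []                                      # topic of each token
    for d, doc in enumerate(docs):              # random initialization
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][vid[w]] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                     # remove token from counts
                ndk[d][k] -= 1; nkw[k][vid[w]] -= 1; nk[k] -= 1
                # full conditional P(topic = t | everything else)
                weights = [(ndk[d][t] + alpha) * (nkw[t][vid[w]] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k                     # reassign and restore counts
                ndk[d][k] += 1; nkw[k][vid[w]] += 1; nk[k] += 1
    return z, ndk, nkw, vocab
```

    The estimated topic-word counts are what a human analyst then inspects and labels, which is where the frameworks above reintroduce human reasoning into the loop.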

  • Recorded On: 08/10/2023

    This course is designed to introduce education researchers with little or no background in social network analysis (SNA) to social network theory, examples of network analysis in educational contexts, and applied experience analyzing real-world data sets. To support scholars’ conceptual understanding of SNA as both a theoretical perspective and an analytical method, the instructors will provide short presentations and facilitate peer discussion on topics ranging from broad applications of SNA in educational contexts to specific approaches for data collection and storage. This course will also provide scholars with applied experience analyzing network data through code-alongs and interactive case studies that use widely adopted tools (e.g., R, RStudio, and GitHub) and demonstrate common techniques (e.g., network visualization, measurement, and modeling). Collectively, these activities will help scholars both appreciate and experience how SNA can be used to understand and improve student learning and the contexts in which learning occurs. While prior experience with R, RStudio, and GitHub is recommended to complete more advanced activities, it is not required.
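    To give a flavor of the "measurement" side mentioned above, one of the simplest network measures, degree centrality, takes only a few lines. This sketch is in Python rather than the R used in the course, and the tiny edge list is invented for illustration:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality of an undirected graph given as edge pairs.

    Each node's degree is divided by the maximum possible degree (n - 1),
    so a node tied to every other node scores 1.0.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    n = len(adj)
    return {node: len(neigh) / (n - 1) for node, neigh in adj.items()}

# Hypothetical advice-seeking ties among four teachers
edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
```

    In R, the same measure is available through standard SNA packages; the point here is only that network "measurement" reduces relational data to per-node statistics that can then feed visualization or modeling.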

  • Recorded On: 09/07/2023

    Qualitative meta-synthesis is a rigorous and innovative approach to analyzing findings from multiple qualitative education research studies that have been determined to meet pre-established criteria (e.g., area of research, methodology used). Instructors from the Institute for Meta-Synthesis will teach theory and techniques for qualitative meta-synthesis, with a main goal of preparing participants to interrogate their topics in education research towards equity. The instructors have successfully completed and published on multiple meta-synthesis projects on equity topics in STEM education; examples and activities will be based on their research data. Despite its potential to help address issues of equity in education and to provide policy guidance at the national level, meta-synthesis is a methodology that is rarely introduced to graduate students. This course will address this knowledge gap for graduate students by teaching participants how to conduct qualitative meta-synthesis research, with a particular emphasis on justice-oriented aims and equitable research practices using examples from STEM education. Furthermore, skills learned for meta-synthesis may be applied to other important research tasks, such as conducting searches for literature reviews. This course is geared towards graduate students and early career scholars. Those interested in participating in this course ideally should have familiarity with literature reviews and qualitative research literature, though that is not required.

  • Recorded On: 09/21/2023

    This inquiry-based methods course will focus on how researchers can take a problem of practice or topic of interest and transform it into a set of researchable questions. Participants will come to the course with a research question, and we will focus on improving it by interrogating its main concepts and unit(s) of analysis. This course is particularly relevant to practitioner researchers, executive doctoral and master's students, and early-career educational researchers. Taught by two established researchers who are faculty in doctoral programs in education, this seminar will be a hands-on, creative, collaborative, and constructively critical space of inquiry and support. The goal is for every participant to leave with a set of research questions and a plan for next steps in research design. Participants will be asked to submit their draft research questions or topics on a shared document prior to the course. Required material and software include a word-processing program and access to Padlet and Google Docs.

AERA 2022 - Virtual Research Learning Series

  • Recorded On: 05/18/2022

    The purpose of this four-hour course is to survey how qualitative data can be analyzed inductively through three different methods from the canon of qualitative inquiry heuristics: 1) codes and categories; 2) thematic analysis; and 3) assertion development. Participants will explore these methods by analyzing authentic data sets. The first is in vivo coding and categorizing an interview excerpt of a teacher’s ways of working with her students. The second is thematic analysis of a teacher’s narrative about her relationships with students. The third is the development of interpretive assertions about an ethical dilemma in psychological research. Additional workshop topics include writing analytic memos, constructing diagrams and matrices, and poetic inquiry. Course activities and objectives:
    1. differentiate the following terms: qualitative data analysis, pattern, code, category, theme, theoretical construct, assertion, inference-making, vignette
    2. code and categorize an interview transcript excerpt
    3. analyze an interview transcript excerpt thematically
    4. develop interpretive assertions about a dialogic encounter over research ethics
    5. write short analytic memos
    6. construct a process diagram
    7. compose a found data poem
    The workshop is targeted to graduate students and novices to qualitative research. Qualitative research instructors may also find the workshop useful for experiencing new pedagogical methods with their students. No pre-course assignments or special materials are needed for this course.

  • This course will introduce the unique design features of the National Assessment of Educational Progress (NAEP) and TIMSS data to researchers and provide guidance in the data analysis strategies they require, including the selection and use of appropriate plausible values, sampling weights, and variance estimation procedures (i.e., jackknife approaches). The course will provide participants with hands-on training in analyzing public-use NAEP and TIMSS data files using the R package EdSurvey, which was developed for analyzing national and international large-scale assessment data with complex psychometric and sampling designs. Participants will learn how to perform:
    • data processing and manipulation,
    • descriptive statistics,
    • cross tabulation and plausible value means, and
    • linear and logistic regression.
    The knowledge and analytic approach learned in this course can be applied to other large-scale national and international data with plausible values. This course is designed for individuals in government, universities, the private sector, and nonprofit organizations who are interested in learning how to analyze large-scale assessment data with plausible values. Participants should have at least basic knowledge of R (e.g., an entry-level course in R programming) as well as statistical techniques including statistical inference and multiple regression. Working knowledge of item response theory and sampling theory is preferred. Participants need a computer preloaded with the latest versions of R and RStudio to participate in the hands-on portion.
    Course Instructors: Emmanuel Sikali, U.S. Department of Education, National Center for Education Statistics; Paul Bailey, American Institutes for Research; Ting Zhang, American Institutes for Research; Michael Lee, American Institutes for Research; Eric Buehler, American Institutes for Research; Martin Hooper, American Institutes for Research
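    The plausible-value logic behind these analyses can be sketched independently of EdSurvey: a statistic is computed once per plausible value, and the results are combined with Rubin's multiple-imputation rules. The helper below is an illustrative sketch, not EdSurvey code:

```python
import statistics

def combine_plausible_values(estimates, sampling_vars):
    """Combine a statistic computed on each of M plausible values (Rubin's rules).

    estimates: the statistic (e.g., a mean) computed on each plausible value.
    sampling_vars: the corresponding sampling variances (e.g., from jackknife).
    Returns the combined point estimate and its standard error.
    """
    m = len(estimates)
    point = statistics.mean(estimates)          # average over plausible values
    u_bar = statistics.mean(sampling_vars)      # average sampling variance
    b = statistics.variance(estimates)          # between-imputation variance
    total_var = u_bar + (1 + 1 / m) * b         # Rubin's total variance
    return point, total_var ** 0.5
```

    EdSurvey performs this combination automatically (together with the jackknife weighting); the sketch only shows why analyzing a single plausible value as if it were an observed score understates uncertainty.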

  • This course will introduce advanced methods in meta-analysis. Topics covered include models for handling multiple effect sizes per study (dependent effect sizes) and exploring heterogeneity, the use of meta-analysis structural equation modeling (MASEM), and an introduction to single-case experimental design meta-analysis. The statistical package R will be used to conduct the statistical techniques discussed. Participants are encouraged to bring their own research in progress to the workshop. The activities will include lecture, hands-on exercises, and individual consultation. This course is designed to follow the introduction to systematic review and meta-analysis course given by the instructors in prior AERA Professional Development training sessions. The target audience is researchers with systematic review and meta-analysis experience who are interested in learning advanced methods for meta-analysis. Knowledge of basic descriptive statistics, systematic review, and basic meta-analysis is assumed.
    Course Instructors: Terri Pigott, Georgia State University; Ryan Williams, American Institutes for Research; Tasha Beretvas, The University of Texas at Austin; Wim Van Den Noortgate, Katholieke Universiteit Leuven

  • In 2017, the National Assessment of Educational Progress (NAEP) began its official transition to the Digitally Based Assessment (DBA) format. The use of DBAs has enabled the recording of students’ interaction with assessment items (e.g., time on task, number of visits, response changes, interactions with graphics or interactive components), as well as with the test interface (use of support functions such as drawing). Course participants will learn how to analyze NAEP process data using a process mining framework to understand students’ processes during the assessment. The course is designed for participants, from novice to advanced experience with process data, who have a solid understanding of coding in R. Attendees will learn about NAEP assessment features, data manipulation and cleaning, sequence formation, the kinds of research questions that can be addressed, and the analytic methods used in process mining approaches, specifically a) sequence clustering methods, b) business process mining algorithms, and c) natural language processing approaches.

AERA - Virtual Research Learning Series (2020-2022)

AERA 2021 Virtual Research Learning Series

  • Recent developments in qualitative research include increasing analysis of multimodality. This course introduces scholars to multimodal analysis via social semiotics using diverse perspectives from multimodality and narrative, frame analysis, and nexus analysis. Course objectives include introduction to social semiotics and multimodality, basic techniques in analysis, and considerations of the role of theory. The target audience is graduate students, early career scholars, and advanced researchers who may have limited knowledge of multimodality and social semiotics and seek to learn about theories and analysis related to multimodality.

  • The purpose of this workshop is to train researchers and evaluators how to plan efficient and effective cluster and multisite randomized studies that probe hypotheses concerning main effects, mediation, and moderation. We focus on the conceptual logic and mechanics of multilevel studies and train participants in how to plan cluster and multisite randomized studies with adequate power to detect multilevel mediation, moderation, and main effects. We introduce participants to the free PowerUp! software programs designed to estimate the statistical power to detect mediation, moderation, and main effects across a wide range of designs. The workshop will combine lecture with hands-on practice with the free software programs. The target audience includes researchers and evaluators interested in planning and conducting multilevel studies that investigate mediation, moderation, or main effects. Participants should bring a laptop to the session.
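    The core calculation that power software automates for the simplest case, a two-level cluster-randomized design with a main-effect hypothesis, can be sketched directly. This is a normal-approximation sketch with assumed notation, not the PowerUp! implementation:

```python
from statistics import NormalDist

def power_cluster_rct(delta, j, n, rho, p=0.5, alpha=0.05):
    """Approximate power to detect a standardized main effect.

    delta: standardized treatment effect; j: number of clusters;
    n: students per cluster; rho: intraclass correlation;
    p: proportion of clusters assigned to treatment;
    alpha: two-sided significance level. No covariates assumed.
    """
    nd = NormalDist()
    # standard error of the standardized cluster-level treatment effect
    se = ((rho + (1 - rho) / n) / (p * (1 - p) * j)) ** 0.5
    z_crit = nd.inv_cdf(1 - alpha / 2)   # two-sided critical value
    return nd.cdf(delta / se - z_crit)
```

    The sketch makes the design trade-offs visible: adding clusters (j) raises power far more effectively than adding students per cluster (n) once rho is nontrivial, which is the intuition the workshop develops for mediation and moderation designs as well.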

  • This course will introduce the unique design features of large-scale assessment data and provide guidance in data analysis strategies, including the selection and use of appropriate plausible values, sampling weights, and variance estimation procedures (i.e., jackknife approaches). The course will provide participants with virtual training in analyzing public-use NAEP or TIMSS data files using the R package EdSurvey, which was developed for analyzing national and international large-scale assessment data with complex psychometric and sampling designs.

  • This interactive training course will introduce the concepts of unidimensional IRT models and provide instruction, demonstration, and hands-on opportunities to use the free R software to estimate commonly used IRT models. Participants will receive a discount code for Using R for Item Response Theory Model Applications, written by the course instructors.
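    For orientation, the item response function at the heart of one commonly used unidimensional model, the two-parameter logistic (2PL), is compact enough to state directly. This is an illustrative Python sketch; the course itself works in R:

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function.

    Probability that an examinee with ability theta answers correctly
    an item with discrimination a and difficulty b.
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

    When ability equals item difficulty (theta = b), the probability is exactly 0.5; the discrimination a controls how steeply the curve rises around that point. Estimating a and b from response data is what the R packages covered in the course do.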

  • Appropriate for graduate students and seasoned academics, this hands-on course will be a straightforward guide to helping participants begin to understand and overcome the psychological, emotional, and logistical hurdles that can get in the way of their being productive writers. Specifically, this course will intertwine a discussion of the research underlying the ways academic writers often sabotage their success with practical strategies designed to help session participants build a healthier relationship with writing to ultimately write more with less pain.

  • The knowledge-based benefits resulting from the use of artificial intelligence, machine learning, and data science and visualization tools in education research remain conditioned on computer programming expertise. Democratizing Data Science (DDS), a new data analytics movement, frees these benefits by lifting computer programming restrictions and offering open software access to conduct qualitative and mixed method research. This course constitutes the first product released as part of the mission of DDS.

2021 AERA Virtual Annual Meeting Professional Development Courses

AERA 2020 Virtual Research Learning Series

Recent AERA Webinars

Annual Meeting Professional Development Courses

Improving Generalizations from Experiments: New methods
Instructors: Elizabeth Tipton, Teachers College, Columbia University; Larry V. Hedges, Northwestern University
Sensitivity Analysis: Quantifying the Discourse About Causal Inference
Instructors: Kenneth Frank, Michigan State University; Yun-jia Lo, Michigan State University; Michael Seltzer, University of California, Los Angeles; Min Sun, Virginia Polytechnic Institute and State University
How to Analyze Large-Scale Assessments Data from Matrix Booklet Sampling Design: Focus on Psychometrics behind and Hands-on Analysis Using Actual Sample Data
Faculty: Emmanuel Sikali, National Center for Education Statistics; Andrew Kolstad, National Center for Education Statistics; Young Yee Kim, American Institutes for Research
Using the National Longitudinal Surveys of Youth for Education Research
Faculty: Elizabeth Cooksey, The Ohio State University; Steve McClaskie, The Ohio State University