- Assessment is an essential and ongoing component of the teaching and learning processes. No skill is more important to an instructor than the ability to continuously analyze, appraise, and judge a student's performance. The student looks to the instructor for guidance, suggestions for improvement, and encouragement. The instructor must gather the information needed to evaluate student progress throughout the course. This information helps to shape the learning process by guiding the instructor on what needs to be reinforced during instruction, and by helping the instructor determine the student's readiness to move forward.
- This chapter examines the instructor’s role in assessing levels of learning, describes methods of assessment, and discusses how to construct and conduct effective assessments. The techniques and methods described in this chapter apply as much to the aviation instructor in the classroom as to the aircraft maintenance instructor in the shop, or to the flight instructor in the aircraft or in the briefing area. Since each student is different and each learning situation is unique, the outcome may not be what the instructor expected. Whatever the outcome, the instructor must be able to assess student performance and convey this information to the student. To do so, the instructor utilizes several different types of assessment
- Most instructors and students are familiar with the term "grading." A more useful term is "assessment," which is the process of gathering measurable information to meet evaluation needs. The word "assess" comes from the Latin assidere, meaning "to sit beside." The term thus conveys the idea that assessment involves both judgment by the instructor and collaboration with the student during the evaluation process.
- This chapter presents and discusses two broad categories of assessment. The first is traditional assessment, which often involves the kind of written testing (e.g., multiple choice, matching) and grading that is most familiar to instructors and students. To achieve a passing score on a traditional assessment, the student usually has a set amount of time to recognize or reproduce memorized terms, formulas, or data, and each question has a single correct answer. Consequently, traditional assessment is more likely to be used to judge, or evaluate, the student's progress at the rote and understanding levels of learning.
- The second category of assessment is authentic assessment. Authentic assessment requires the student to demonstrate not just rote and understanding, but also the application and correlation levels of learning. Authentic assessment generally requires the student to perform real-world tasks, and demonstrate a meaningful application of skills and competencies. In other words, the authentic assessment requires the student to exhibit in-depth knowledge by generating a solution instead of merely choosing a response
- In authentic assessment, there are specific performance criteria, or standards, that students know in advance of the actual assessment. The terms "criteria/criterion" and "standard" are often used interchangeably. They refer to the characteristics that define acceptable performance on a task. Another term used in association with authentic assessment is "rubric." A rubric is a guide used to score performance assessments in a reliable, fair, and valid manner. It is generally composed of dimensions for judging student performance, a scale for rating performances on each dimension, and standards of excellence for specified performance levels
- Whether traditional or authentic, an assessment can be either formal or informal. Formal assessments usually involve documentation, such as a quiz or written examination. They are used periodically throughout a course, as well as at the end of a course, to measure and document whether or not the course objectives have been met. Informal assessments, which can include verbal critique, generally occur as needed and are not part of the final grade
- Other terms associated with assessment include diagnostic, formative, and summative:
- Diagnostic assessments are used to assess student knowledge or skills prior to a course of instruction
- Formative assessments, which are not graded, are used as a wrap-up of the lesson and to set the stage for the next lesson. This type of assessment, which is limited to what transpired during that lesson, informs and guides the instructor on which areas to reinforce
- Summative assessments are used periodically throughout the training to measure how well learning has progressed to that point. For example, a chapter quiz or an end-of-course test can measure the student’s overall mastery of the training. These assessments are an integral part of the lesson, as well as the course of training
- Assessment is an essential and ongoing component of the teaching and learning processes. An effective assessment provides critical information to both the instructor and the student. Both instructor and student need to know how well the student is progressing. A good assessment provides practical and specific feedback to students, including direction and guidance on how to raise their level of performance. Most importantly, a well-designed and effective assessment process contributes to the development of aeronautical decision-making and judgment skills by helping the student learn to evaluate his or her own knowledge and performance accurately.
- A well-designed and effective assessment is also a very valuable tool for the instructor. By highlighting the areas in which a student’s performance is incorrect or inadequate, it helps the instructor see where more emphasis is needed. If, for example, several students falter when they reach the same step in a weight-and-balance problem, the instructor might recognize the need for a more detailed explanation, another demonstration of the step, or special emphasis in the assessment of subsequent performance
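- For illustration, consider the arithmetic such a weight-and-balance step involves (the numbers here are hypothetical). Each item's moment is its weight multiplied by its arm, and the center of gravity (CG) is the total moment divided by the total weight. An empty aircraft weighing 1,100 pounds with an arm of 80 inches contributes a moment of 88,000 inch-pounds; a 200-pound pilot at an arm of 85 inches adds 17,000 inch-pounds. The totals are 1,300 pounds and 105,000 inch-pounds, so the CG is 105,000 ÷ 1,300, or about 80.8 inches aft of datum. If several students consistently stop short of dividing total moment by total weight, the assessment has pinpointed exactly which step needs to be retaught.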
- In order to provide direction and raise the student’s level of performance, assessment must be factual, and it must be aligned with the completion standards of the lesson [Figure 5-1]
- The effective assessment is objective and focused on student performance; it should not reflect the personal opinions, likes, dislikes, or biases of the instructor. Sympathy or over-identification with a student, to such a degree that it influences objectivity, is known as "halo error." A conflict of personalities can also distort an opinion. If an assessment is to be objective, it must be honest; it must be based on the performance as it was, not as it could have been.
- The effective assessment is also flexible. The instructor must evaluate the entire performance of a student in the context in which it is accomplished. Sometimes a good student turns in a poor performance, and a poor student turns in a good one. A friendly student may suddenly become hostile, or a hostile student may suddenly become friendly and cooperative. The instructor must fit the tone, technique, and content of the assessment to the occasion, as well as to the student. An assessment should be designed and executed so that the instructor can allow for variables. The ongoing challenge for the instructor is deciding what to say, what to omit, what to stress, and what to minimize at the proper moment.
- The student must accept the instructor in order to accept his or her assessment willingly. Students must have confidence in the instructor's qualifications, teaching ability, sincerity, competence, and authority. If the instructor has not had the opportunity to establish that credibility before a formal assessment arises, the instructor's manner, attitude, and familiarity with the subject at hand must serve this purpose. Assessments must be presented fairly, with authority, conviction, and sincerity, and from a position of recognizable competence; instructors must never rely on their position alone to make an assessment acceptable.
- A comprehensive assessment is not necessarily a long one, nor must it treat every aspect of the performance in detail. The instructor must decide whether the greater benefit comes from a discussion of a few major points or a number of minor points. The instructor might assess what most needs improvement, or only what the student can reasonably be expected to improve. An effective assessment covers strengths as well as weaknesses
- An assessment is pointless unless the student benefits from it. Praise for its own sake is of no value, but praise can be very effective in reinforcing and capitalizing on things that are done well, in order to inspire the student to improve in areas of lesser accomplishment. When identifying a mistake or weakness, the instructor must give positive guidance for correction. Negative comments that do not point toward improvement or a higher level of performance should be omitted from an assessment altogether
- An assessment must be organized. Almost any pattern is acceptable, as long as it is logical and makes sense to the student. An effective organizational pattern might be the sequence of the performance itself. Sometimes an assessment can profitably begin at the point at which a demonstration failed, and work backward through the steps that led to the failure. A success can be analyzed in similar fashion. Alternatively, a glaring deficiency can serve as the core of an assessment. Breaking the whole into parts, or building the parts into a whole, is another possible organizational approach
- An effective assessment reflects the instructor’s thoughtfulness toward the student’s need for self-esteem, recognition, and approval. The instructor must not minimize the inherent dignity and importance of the individual. Ridicule, anger, or fun at the expense of the student never has a place in assessment. While being straightforward and honest, the instructor should always respect the student’s personal feelings. For example, the instructor should try to deliver criticism in private
- The instructor’s comments and recommendations should be specific. Students cannot act on recommendations unless they know specifically what the recommendations are. A statement such as, "Your second weld wasn’t as good as your first," has little constructive value. Instead, the instructor should say why it was not as good, and offer suggestions on how to improve the weld. If the instructor has a clear, well-founded, and supportable idea in mind, it should be expressed with firmness and authority, and in terms that cannot be misunderstood. At the conclusion of an assessment, students should have no doubt about what they did well and what they did poorly and, most importantly, specifically how they can improve
- As defined earlier, traditional assessment generally refers to written testing, such as multiple choice, matching, true/false, fill in the blank, etc. Written assessments must typically be completed within a specific amount of time. There is a single, correct response for each item. The assessment, or test, assumes that all students should learn the same thing, and relies on rote memorization of facts. Responses are often machine scored, and offer little opportunity for a demonstration of the thought processes characteristic of critical thinking skills
- One shortcoming of traditional assessment approaches is that they are generally instructor centered and measure performance against an empirical standard. In traditional assessment, fairly simple grading matrices, such as the one shown in Figure 5-2, are used. The problem with this type of assessment has always been that a performance graded satisfactory on the first lesson may be unsatisfactory by lesson three.
- Still, tests of this nature do have a place in the assessment hierarchy. Multiple choice, supply type, and other such tests are useful in assessing the student’s grasp of information, concepts, terms, processes, and rules—factual knowledge that forms the foundation needed for the student to advance to higher levels of learning
- Characteristics of a Good Test
- Whether an instructor designs his or her own tests or uses commercially available test banks, it is important to know the components of an effective test. (Note: This section is intended to introduce basic concepts of written test design. Please see Appendix A for testing and test-writing publications.)
- A test is a set of questions, problems, or exercises intended to determine whether the student possesses a particular knowledge or skill. A test can consist of just one test item, but it usually consists of a number of test items. A test item measures a single objective, and calls for a single response. The test could be as simple as the correct answer to an essay question or as complex as completing a knowledge or practical test. Regardless of the underlying purpose, effective tests share certain characteristics. [Figure 5-3]
- Reliability is the degree to which test results are consistent with repeated measurements. If identical measurements are obtained every time a certain instrument is applied to a certain dimension, the instrument is considered reliable. The reliability of a written test is judged by whether it gives consistent measurement to a particular individual or group. Keep in mind, though, that knowledge, skills, and understanding can improve with subsequent attempts at taking the same test, because the first test serves as a learning device
- Validity is the extent to which a test measures what it is supposed to measure, and it is the most important consideration in test evaluation. The instructor must carefully consider whether the test actually measures what it is supposed to measure. To estimate validity, several instructors read the test critically and consider its content relative to the stated objectives of the instruction. Items that do not pertain directly to the objectives of the course should be modified or eliminated
- Usability refers to the functionality of tests. A usable written test is easy to give if it is printed in a type size large enough for students to read easily. The wording of both the directions for taking the test and of the test items needs to be clear and concise. Graphics, charts, and illustrations appropriate to the test items must be clearly drawn, and the test should be easily graded
- Objectivity describes singleness of scoring of a test; the grade awarded should not depend on who does the grading. Essay questions illustrate how difficult objectivity is to achieve: it is nearly impossible to prevent an instructor's own knowledge and experience in the subject area, writing style, or grammar from affecting the grade awarded. Selection-type test items, such as true/false or multiple choice, are much easier to grade objectively.
- Comprehensiveness is the degree to which a test measures the overall objectives. Suppose, for example, an AMT wants to measure the compression of an aircraft engine. Measuring compression on a single cylinder would not indicate the condition of the entire engine. Similarly, a written test must sample an appropriate cross-section of the objectives of instruction. The instructor has to make certain the evaluation includes a representative and comprehensive sampling of the objectives of the course.
- Discrimination is the degree to which a test distinguishes the difference between students. In classroom evaluation, a test must measure small differences in achievement in relation to the objectives of the course. A test constructed to identify the difference in the achievement of students has three features:
- A wide range of scores
- All levels of difficulty
- Items that distinguish between students with differing levels of achievement of the course objectives
- Please see the reference section for information on the advantages and disadvantages of multiple-choice, supply-type, and other written assessment instruments, as well as guidance on creating effective test items
- Authentic assessment is a type of assessment in which the student is asked to perform real-world tasks, and demonstrate a meaningful application of skills and competencies. Authentic assessment lies at the heart of training today’s aviation student to use critical thinking skills. Rather than selecting from predetermined responses, students must generate responses from skills and concepts they have learned. By using open-ended questions and established performance criteria, authentic assessment focuses on the learning process, enhances the development of real-world skills, encourages higher order thinking skills (HOTS), and teaches students to assess their own work and performance
- Collaborative Assessment
- There are several aspects of effective authentic assessment. The first is the use of open-ended questions in what might be called a "collaborative critique," which is a form of student-centered grading. As described in the scenario that introduced this chapter, the instructor begins by using a four-step series of open-ended questions to guide the student through a complete self-assessment
- Replay—ask the student to verbally replay the flight or procedure. Listen for areas in which the instructor's perceptions differ from the student's perceptions, and discuss why they do not match. This approach gives the student a chance to validate his or her own perceptions, and it gives the instructor critical insight into the student's judgment abilities.
- Reconstruct—the reconstruction stage encourages the student to learn by identifying the key things that he or she would have, could have, or should have done differently during the flight or procedure
- Reflect—insights come from investing perceptions and experiences with meaning, requiring reflection on the events. For example:
- What was the most important thing you learned today?
- What part of the session was easiest for you? What part was hardest?
- Did anything make you uncomfortable? If so, when did it occur?
- How would you assess your performance and your decisions?
- Did you perform in accordance with the PTS?
- Redirect—the final step is to help the student relate lessons learned in this session to other experiences, and consider how they might help in future sessions. Questions might include:
- How does this experience relate to previous lessons?
- What might be done to mitigate a similar risk in a future situation?
- Which aspects of this experience might apply to future situations, and how?
- What personal minimums should be established, and what additional proficiency flying and/or training might be useful?
- The purpose of the self-assessment is to stimulate growth in the student's thought processes and, in turn, behaviors. The self-assessment is followed by an in-depth discussion between the instructor and the student, which compares the instructor's assessment to the student's self-assessment. Through this discussion, the instructor and the student jointly determine the student's progress on a rubric. As explained earlier, a rubric is a guide for scoring performance assessments in a reliable, fair, and valid manner. It is generally composed of dimensions for judging student performance, a scale for rating performances on each dimension, and standards of excellence for specified performance levels.
- The collaborative assessment process in student-centered grading uses two broad rubrics: one that assesses the student’s level of proficiency on skill-focused maneuvers or procedures, and one that assesses the student’s level of proficiency on single-pilot resource management (SRM), which is the cognitive or decision-making aspect of flight training
- The performance assessment dimensions for each type of rubric are as follows:
- Maneuvers/Procedures Rubric
- Describe—at the completion of the scenario, the student is able to describe the physical characteristics and cognitive elements of the scenario activities, but needs assistance to execute the maneuver or procedure successfully
- Explain—at the completion of the scenario, the student is able to describe the scenario activity and understand the underlying concepts, principles, and procedures that comprise the activity, but needs assistance to execute the maneuver or procedure successfully
- Practice—at the completion of the scenario, the student is able to plan and execute the scenario. Coaching, instruction, and/or assistance will correct deviations and errors identified by the instructor
- Perform—at the completion of the scenario, the student is able to perform the activity without instructor assistance. The student will identify and correct errors and deviations in an expeditious manner. At no time will the successful completion of the activity be in doubt. ("Perform" is used to signify that the student is satisfactorily demonstrating proficiency in traditional piloting and systems operation skills)
- Not observed—any event not accomplished or required
- For example, a student can describe a landing and can tell the flight instructor about the physical characteristics and appearance of the landing. On a good day, with the wind straight down the runway, the student may be able to practice landings with some success while still functioning at the rote level of learning. However, on a gusty crosswind day the student needs a deeper level of understanding to adapt to the different conditions. If a student can explain all the basic physics associated with lift/drag and crosswind correction, he or she is more likely to practice successfully and eventually perform a landing under a wide variety of conditions
- Single-Pilot Resource Management (SRM) Rubric
- Explain—the student can verbally identify, describe, and understand the risks inherent in the flight scenario, but needs to be prompted to identify risks and make decisions
- Practice—the student is able to identify, understand, and apply SRM principles to the actual flight situation. Coaching, instruction, and/or assistance quickly corrects minor deviations and errors identified by the instructor. The student is an active decision maker
- Manage-Decide—the student can correctly gather the most important data available both inside and outside the flight deck, identify possible courses of action, evaluate the risk inherent in each course of action, and make the appropriate decision. Instructor intervention is not required for the safe completion of the flight
- In SRM, the student may be able to describe basic SRM principles during the first flight. Later, he or she is able to explain how SRM applies to different scenarios that are presented on the ground and in the air. When the student actually begins to make quality decisions based on good SRM techniques, he or she earns a grade of manage-decide. The advantage of this type of grading is that both flight instructor and student know exactly how far the student's learning has progressed.
- Let's look at how the rubric in Figure 5-4 might be used in the flight training scenario at the beginning of this chapter. During the postflight debriefing, CFI Linda asks her student, Brian, to assess his performance for the day, using the Replay–Reconstruct–Reflect–Redirect guided discussion questions described in the Collaborative Assessment subsection. Based on this assessment, she and Brian discuss where Brian's performance falls in the rubrics for maneuvers/procedures and SRM. This part of the assessment may be discussed verbally or, alternatively, Brian and Linda separately complete an assessment sheet for each element of the flight.
- When Brian studies the sheet, he finds "Describe, Explain, Practice, and Perform." He decides he was at the perform level since he had not made any mistakes. The flight scenario had been a two-leg Instrument Flight Rules (IFR) scenario to a busy class B airport about 60 miles to the east. Brian felt he had done well in keeping up with programming the GPS and MFD until he reached the approach phase. He had attempted to program the Instrument Landing System (ILS) for runway 7L and had actually flown part of the approach until air traffic control (ATC) asked him to execute a missed approach
- When he compares the sheet he has completed to Linda’s version, Brian discovers that most of their assessments appear to match. An exception is the item labeled "programming the approach." Here, where he had rated the item as "Perform," Linda had rated it as "Explain." During the ensuing discussion, Brian realizes that he had selected the correct approach, but he had not activated it. Before Linda could intervene, traffic dictated a go-around. Her "explain" designation tells Brian that he did not really understand how the GPS worked, and he agrees
- This approach to assessment has several key advantages. One is that it actively involves the student in the assessment process, and establishes the habit of healthy reflection and self-assessment that is critical to being a safe pilot. Another is that these grades are not self-esteem related, since they do not describe a recognized level of prestige (such as A+ or "Outstanding"), but rather a level of performance. The student cannot flunk a lesson. Instead, he or she can only fail to demonstrate a given level of flight and SRM skills
- Both instructors and students may initially be reluctant to use this method of assessment. Instructors may think it requires more time, when in fact it is merely a more structured, effective, and collaborative version of a traditional postflight critique. Also, instructors who learned in the more traditional assessment structure must be careful not to equate or force the dimensions of the rubric into the traditional grading mold of A through F. One way to avoid this temptation is to remember that evaluation should be progressive: the student should achieve a new level of learning during each lesson. For example, in flight one, the automation management area might be a "describe" item. By flight three, it is a "practice" item, and by flight five, it is a "manage-decide" item
- The student may be reluctant to self-assess if he or she has not had the chance to participate in such a process before. Therefore, the instructor may need to teach the student how to become an active participant in the collaborative assessment
- When deciding how to assess student progress, aviation instructors can follow a four-step process:
- First, determine level-of-learning objectives
- Second, list indicators of desired behaviors
- Third, establish criterion objectives
- Fourth, develop criterion-referenced test items
- This process is useful for tests that apply to the cognitive and affective domains of learning, and also can be used for skill testing in the psychomotor domain. The development process for criterion-referenced tests follows a general-to-specific pattern. [Figure 5-4]
- Instructors should be aware that authentic assessment may not be as useful as traditional assessment in the early phases of training, because the student does not have enough information about the concepts or knowledge to participate fully. As discussed in Chapter 2, The Learning Process, when exposed to a new topic, students first tend to acquire and memorize facts. As learning progresses, they begin to organize their knowledge to formulate an understanding of the things they have memorized. When students possess the knowledge needed to analyze, synthesize, and evaluate (i.e., application and correlation levels of learning), they can participate more fully in the assessment process
- Determine Level-of-Learning Objectives
- The first step in developing an appropriate assessment is to state the individual objectives as general, level-of-learning objectives. The objectives should measure one of the learning levels of the cognitive, affective, or psychomotor domains described in chapter 2. The levels of cognitive learning include knowledge, comprehension, application, analysis, synthesis, and evaluation
- For the understanding level, an objective could be stated as, "Describe how to perform a compression test on an aircraft reciprocating engine." This objective requires a student to explain how to do a compression test, but not necessarily perform a compression test (application level). Further, the student would not be expected to compare the results of compression tests on different engines (application level), design a compression test for a different type of engine (correlation level), or interpret the results of the compression test (correlation level). A general level-of-learning objective is a good starting point for developing a test because it defines the scope of the learning task
- List Indicators of Desired Behaviors
- The second step is to list the indicators or samples of behavior that give the best indication of the achievement of the objective. Some level-of-learning objectives cannot be measured directly; as a result, measurable behaviors are selected to give the best evidence of learning. For example, if the instructor expects the student to display the understanding level of learning on compression testing, some of the specific test question answers should describe appropriate tools and equipment, the proper equipment setup, appropriate safety procedures, and the steps used to obtain compression readings. The overall test must be comprehensive enough to give a true representation of the learning to be measured. It is not usually feasible to measure every aspect of a level-of-learning objective, but by carefully choosing samples of behavior, the instructor can obtain adequate evidence of learning.
- Establish Criterion Objectives
- The next step in the test development process is to define criterion (performance-based) objectives. In addition to the behavior expected, criterion objectives state the conditions under which the behavior is to be performed, and the criteria that must be met. If the instructor developed performance-based objectives during the creation of lesson plans, criterion objectives have already been formulated. The criterion objective provides the framework for developing the test items used to measure the level of learning objectives. In the compression test example, a criterion objective to measure the understanding level of learning might be stated as, "The student will demonstrate understanding of compression test procedures for reciprocating aircraft engines by completing a quiz with a minimum passing score of 70 percent"
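- As a simple illustration (the quiz length here is hypothetical), such a criterion is easy to verify arithmetically: on a 20-question quiz, a minimum passing score of 70 percent requires at least 0.70 × 20 = 14 correct answers. The criterion objective thus specifies the behavior (describing compression test procedures), the conditions (a written quiz), and a measurable criterion (70 percent).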
- Develop Criterion-Referenced Test Items
- The last step is to develop criterion-referenced assessment items. The development of written test questions is covered in the reference section. While developing written test questions, the instructor should attempt to measure the behaviors described in the criterion objective(s). The questions in the exam for the compression test example should cover all of the areas necessary to give evidence of understanding the procedure. The results of the test (questions missed) identify areas that were not adequately covered
- Performance-based objectives serve as a reference for the development of test items. If the test is the pre-solo knowledge test, the objectives are for the student to understand the regulations, the local area, the aircraft type, and the procedures to be used. The test should measure the student’s knowledge in these specific areas. Individual instructors should develop their own tests to measure the progress of their students. If the test is to measure the readiness of a student to take a knowledge test, it should be based on the objectives of all the lessons the student has received
- Aviation training also involves performance tests for maneuvers or procedures. The flight instructor does not administer the practical test for a pilot certificate, nor does the aviation maintenance instructor administer the oral and practical exam for certification as an aviation maintenance technician (AMT). However, aviation instructors do get involved with the same skill or performance testing that is measured in these tests. Performance testing is desirable for evaluating training that involves an operation, a procedure, or a process. The job of the instructor is to prepare the student to take these tests. Therefore, each element of the practical test should be evaluated prior to sending an applicant for the practical exam
- Practical tests for maintenance technicians and pilots are criterion-referenced tests. The practical tests, defined in the Practical Test Standards (PTS), are criterion referenced because the objective is for all successful applicants to meet the high standards of knowledge, skill, and safety required by the regulations. The purpose of the PTS is to delineate the standards by which FAA inspectors, designated pilot examiners (DPEs), and designated maintenance examiners (DMEs) conduct tests for ratings and certificates. The standards are in accordance with the requirements of Title 14 of the Code of Federal Regulations (14 CFR) parts 61, 65, 91, and other FAA publications, including the Aeronautical Information Manual (AIM) and pertinent advisory circulars and handbooks
- The objective of the PTS is to ensure the certification of pilots and maintenance technicians at a high level of performance and proficiency, consistent with safety. The PTS for aeronautical certificates and ratings include areas of operation and tasks that reflect the requirements of the FAA publications mentioned above. Areas of operation define phases of the practical test arranged in a logical sequence within each standard. They usually begin with preflight preparation and end with postflight procedures. Tasks are titles of knowledge areas, flight procedures, or maneuvers appropriate to an area of operation. Included are references to the applicable regulations or publications. Private pilot applicants are evaluated in all tasks of each area of operation. Flight instructor applicants are evaluated on one or more tasks in each area of operation. In addition, certain tasks are required to be covered and are identified by notes immediately following the area of operation titles
- Since every task in the PTS may be covered on the practical test, the instructor must evaluate all of the tasks before recommending the maintenance technician or pilot applicant for the practical test. While this evaluation is not necessarily formal, it should adhere to criterion-referenced testing
- Used in conjunction with either traditional or authentic assessment, the critique is an instructor-to-student assessment. It can be conducted either individually or in a classroom setting.
- As discussed earlier, the word critique sometimes has a negative connotation, and the instructor needs to avoid using this method as an opportunity to be overly critical of student performance. An effective critique considers good as well as bad performance, the individual parts, relationships of the individual parts, and the overall performance. A critique can and usually should be as varied in content as the performance being evaluated
- A critique may be oral, written, or both. It should come immediately after a student’s performance, while the details of the performance are easy to recall. An instructor may critique any activity a student performs or practices to improve skill, proficiency, and learning. A critique may be conducted privately or before the entire class. A critique presented before the entire class can be beneficial to every student in the classroom, as well as to the student who performed the exercise or assignment. In this case, however, the instructor should avoid embarrassing the student in front of the class
- There are several useful ways to conduct a critique
- Instructor/Student Critique
- In an instructor/student critique, the instructor leads a group discussion in which members of the class are invited to offer criticism of a performance. This method should be controlled carefully and directed with a clear purpose. It should be organized, and not allowed to degenerate into a random free-for-all.
- Student-Led Critique
- In a student-led critique, the instructor asks a student to lead the assessment. The instructor can specify the pattern of organization and the techniques, or can leave it to the discretion of the student leader. Because of the inexperience of the participants in the lesson area, student-led assessments may not be efficient, but they can generate student interest and learning and, on the whole, be effective.
- Small Group Critique
- In a small group critique, the class is divided into small groups, each assigned a specific area to analyze. Each group presents its findings to the class. It is desirable for the instructor to furnish the criteria and guidelines. The combined reports from the groups can result in a comprehensive assessment.
- Individual Student Critique by Another Student
- The instructor may require another student to present the entire assessment. A variation is for the instructor to ask a number of students questions about the manner and quality of performance. Discussion of the performance and of the assessment can often allow the group to accept more ownership of the ideas expressed. As with all assessments incorporating student participation, it is important that the instructor maintain firm control over the process
- Self-Critique
- A student critiques personal performance in a self-critique. Like all other methods, a self-critique must be controlled and supervised by the instructor
- Written Critique
- A written critique has three advantages. First, the instructor can devote more time and thought to it than to an oral assessment in the classroom. Second, students can keep written assessments and refer to them whenever they wish. Third, when the instructor requires all students to write an assessment of a performance, the student-performer has the permanent record of the suggestions, recommendations, and opinions of all the other students. The disadvantage of a written assessment is that other members of the class do not benefit
- Whatever the type of critique, the instructor must resolve controversial issues and correct erroneous impressions. The instructor must make allowances for the students’ relative inexperience. Normally, the instructor should reserve time at the end of the student assessment to cover those areas that might have been omitted, not emphasized sufficiently, or considered worth repeating
- Oral Quizzing
- The most common means of assessment is direct or indirect oral questioning of students by the instructor. Questions may be loosely classified as fact questions and HOTS questions. The answer to a fact question is based on memory or recall. This type of question usually concerns who, what, when, and where. HOTS questions involve why or how, and require the student to combine knowledge of facts with an ability to analyze situations, solve problems, and arrive at conclusions
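- For example (these sample questions are illustrative, not drawn from any test bank), a fact question might be, "What is the stalling speed of this airplane in the landing configuration?" A corresponding HOTS question might be, "How would a gusting crosswind change the approach speed and touchdown technique you plan to use, and why?" The first calls only for recall; the second requires the student to apply knowledge to a situation, analyze it, and justify a conclusion.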
- Proper quizzing by the instructor can have a number of desirable results:
- Reveals the effectiveness of the instructor’s training methods
- Checks student retention of what has been learned
- Reviews material already presented to the student
- Can be used to retain student interest and stimulate thinking
- Emphasizes the important points of training
- Identifies points that need more emphasis
- Checks student comprehension of what has been learned
- Promotes active student participation, which is important to effective learning
- Characteristics of Effective Questions
- The instructor should devise and write pertinent questions in advance. One method is to place them in the lesson plan. Prepared questions merely serve as a framework, and as the lesson progresses, should be supplemented by such impromptu questions as the instructor considers appropriate. Objective questions have only one correct answer, while the answer to an open-ended HOTS question can be expressed in a variety of possible solutions
- To be effective, questions must:
- Apply to the subject of instruction
- Be brief and concise, but also clear and definite
- Be adapted to the ability, experience, and stage of training of the students
- Center on only one idea (limited to who, what, when, where, how, or why, not a combination)
- Present a challenge to the students
- Types of Questions to Avoid
- Effective quizzing never includes yes/no questions such as "Do you understand?" or "Do you have any questions?" Instructors should also avoid the following types of questions:
- Puzzle—"What is the first action you should take if a conventional gear airplane with a weak right brake is swerving left in a right crosswind during a full flap, power-on wheel landing?"
- Oversize—"What do you do before beginning an engine overhaul?"
- Toss-up—"In an emergency, should you squawk 7700 or pick a landing spot?"
- Bewilderment—"In reading the altimeter—you know you set a sensitive altimeter for the nearest station pressure—if you take temperature into account, as when flying from a cold air mass through a warm front, what precaution should you take when in a mountainous area?"
- Trick questions—these questions cause the students to feel that they are engaged in a battle of wits with the instructor, and the whole significance of the subject of the instruction involved is lost. An example of a trick question would be one in which the response options are 1, 2, 3, and 4, but they are deliberately arranged out of their natural order to catch the inattentive student.
- Irrelevant questions—diversions that introduce only unrelated facts and thoughts and slow the student’s progress. Questions unrelated to the test topics are not helpful in evaluating the student’s knowledge of the subject at hand. An example of an irrelevant question would be to ask a question about tire inflation during a test on the timing of magnetos
- Answering Student Questions
- Tips for responding effectively to student questions, especially in a classroom setting:
- Be sure that you clearly understand the question before attempting to answer
- Display interest in the student’s question and frame an answer that is as direct and accurate as possible
- After responding, determine whether or not the student is satisfied with the answer
- Sometimes it is unwise to introduce considerations more complicated or advanced than necessary to completely answer a student’s question at the current point in training. In this case, the instructor should carefully explain to the student that the question was good and pertinent, but that a detailed answer would, at this time, unnecessarily complicate the learning tasks. The instructor should invite the student to reintroduce the question later at the appropriate point in training
- Occasionally, a student asks a question that the instructor cannot answer. In such cases, the instructor should freely admit not knowing the answer, but should promise to get the answer or, if practicable, offer to help the student look it up in available references
- Students look to instructors for guidance
- Keep students informed of their progress
- Critiques are conducted immediately after student performance, while details are fresh
- Critiques can involve re-learning a topic previously forgotten or not practiced
- Common Misconceptions:
- Critiques are a step in the learning process, not grading
- Critiques are not necessarily negative
- Critiques:
- Comprehensive:
- Instructor must decide whether the greater benefit will come from a discussion of a few major points or a number of minor points
- Objective:
- Facts, not opinions, honest
- Organized:
- Must follow some pattern to maintain impact
- Breaking the whole into parts or building the parts into a whole has strong possibilities
- Flexible:
- Within context, tailored to each student, proper for the moment
- Acceptable:
- Must "accept" instructor first (trust, confidence)
- Constructive:
- Can be used to inspire a student
- Positive guidance
- Thoughtful:
- Think of how the student will react
- Specific:
- Must focus on something concrete
- Clear, concise recommendations
- Instructor/Student Critique:
- Student-led Critique:
- Students lead critique
- Not efficient because student lacks experience
- Small Group Critique:
- Small groups evaluate a specific area
- Instructor must furnish guidelines
- Individual Student Critique by Another Student:
- Different points of view
- Self-Critique:
- Instructor must not leave controversial issues unresolved
- Make allowance for students' inexperience
- Written Critique:
- Instructor can devote more time and thought
- Student can keep written critiques
- The student-performer has a permanent record of the suggestions of all the other students
- Do not extend critique beyond its scheduled time
- Avoid trying to cover too much
- Allow time for a summary of critique
- Avoid absolute statements
- Avoid controversies with the class
- Never allow yourself to be maneuvered into the unpleasant position of defending criticism
- Make certain that it is consistent with the oral portion
- Oral Quizzing:
- Reveals effectiveness of the instructor's training procedures
- Checks the student's retention of what has been learned
- Reviews material already covered
- Can be used to retain the student's interest and stimulate thinking
- Emphasizes important points of training
- Identifies points that need more emphasis
- Checks the student's comprehension
- Promotes active student participation
- Effective Questions:
- Apply to subject
- Brief and concise, but clear and definite
- Should present a challenge
- Effective Answers:
- Answer directly and accurately
- Not complicated
- Admit not knowing answers
- Ineffective Questions:
- "Do you understand?"
- "Do you have any questions?"
- Puzzle - complicated
- Oversize - too much material
- Toss-up - subjective
- Bewilderment - confusion
- Trick questions
- Irrelevant questions
- Written Tests:
- Characteristics of a Good Test:
- Reliability: consistent results
- Validity: measures what it is supposed to
- Usability: easy to give, easy to read, wording is clear
- Comprehensiveness: liberally samples what is being measured
- Discrimination: measures small differences in achievement between students
- Test Development:
- Determine Levels-of-Learning Objectives:
- Defines the scope of the learning task and provides a starting point
- List Indicators/Samples of Desired Behavior:
- Choose behaviors that give the best evidence of learning
- Establish Criterion Objectives:
- Develop Criterion-Referenced Test Items:
- Norm-referenced testing: measures students against others
- Criterion-referenced testing: measures performance against a defined standard (e.g., practical tests)
- Two Types:
- Supply-type:
- Selection-type:
- True/False:
- True/false statements should test only one idea
- Statements must be entirely true or entirely false
- Avoid the use of negatives
- Multiple Choice:
- Unique solution
- Several pertinent solutions
- Can be incomplete sentences
- Matching:
- Based on objectives and goals established
- Each item should test an important concept or idea
- Must be stated so that everyone who is competent in the subject would agree on the correct response
- Understandable
- Diagrams should be included when necessary
- Solution should demand knowledge of the subject
- This chapter has offered the aviation instructor techniques and methods for assessing how, what, and how well a student is learning. Well-designed assessments define what is worth knowing, thereby improving student learning. Since today’s students want to know the criteria by which they are assessed, as well as practical and specific feedback, it is important for aviation instructors to be familiar with the different types of assessments available for monitoring student progress throughout a course of training, and how to select the most appropriate assessment method
- Flight instructors may also use WingX Rewind as a flight analysis tool
- The AOPA offers flight guides for instructors