Tuesday, 05 January 2016


Name            : Rezki Firdaus
Student's ID    : 1407335
Topic           : Summary; Part Four: Assessment in ELT—Basic Concepts, Test Development, and Issues (Basic Concepts in Test Development; Testing and Evaluation; Language Assessment: Practical Classroom Applications; Assessment and Some Research on ELT)
Date            : November 25th, 2014

Basic Concepts in Test Development
Testing students on what they have learnt is necessary to know whether they really understand what the teachers have taught. A test is a tool used to gauge the students' progress or achievement during the teaching and learning process, and conducting a test is part of the syllabus. A test can serve both assessment and evaluation: assessment means that the test is conducted to measure the students' ability, while evaluation means that the students' test scores are one of the factors showing the success of the teaching and learning process. When most of the students pass the test, it can be concluded that they have achieved the learning objectives well. On the other hand, if few students pass, the teachers need to consider whether there is a specific problem related to the students, the teaching technique, or even the design of the test itself.
How the teachers design a test influences the students' test scores. Brown (2001) proposes four aspects to consider in designing a test, namely practicality, reliability, validity, and authenticity. The teachers need to make sure that the material covered in the test is based on what the students have learnt, that the language used in the test is suitable for the students' level, and that the instructions are clear and understandable.
Testing can be conducted in the middle of the semester, at the end of the semester, or even at the end of every unit; it depends on the teachers' purpose in conducting the test. Based on purpose, tests fall into several types. Brown (2001) divides tests into proficiency tests, diagnostic tests, placement tests, achievement tests, and aptitude tests. These kinds of tests have different purposes and criteria, and they may involve the four language skills: writing, listening, reading, and speaking.
The Meaning of a Test; a test is "a method of measuring a person's ability or knowledge in a given domain" (Brown, 2001, p. 384). This means a test is conducted to determine the students' mastery of a certain lesson. The students are informed before they take the test: they are told when and where it will be held and what form it will take. It is hoped that the students really prepare for the test so that they can reach the passing grade.
The Essential Components of a Test; a test contains several essential components that should be present together. Brown (2001, pp. 384-385) proposes four essential components of a test, as follows. A test is a method used by the teachers to know their students' ability; there are certain procedures and specific techniques involved, so a test is procedural and technical. A test has the purpose of measuring: most test results take the form of numbers, and there are criteria for ranking those numbers. Students can measure themselves after knowing their test scores; a student can judge her position in a class by looking at the ranking she receives.
A test measures a person's ability: the purpose of conducting a test is to measure someone's ability or knowledge, and the result is used to show someone's competence in a certain field. For example, when someone takes the TOEFL test, she wants to know her language proficiency. A test measures a given domain: a test measures a specific domain involving certain criteria. For example, in the TOEFL test, in Section Two, Part B, test takers are asked to identify the inappropriate word in a sentence. Here the domain being measured is written ability, and analyzing which word is inappropriate involves certain macro- and micro-skills of writing.
Criteria for Testing a Test; it is not easy to design a test, since several aspects must be considered. Some tests seem to be reliable but, in fact, their content does not measure what it is supposed to measure; such a test is not valid. Students' test scores are also influenced by how well the test functions as a real test, so it is suggested that teachers test the test before giving it to the students. Brown (2001, pp. 386-388) suggests three criteria for testing a test. Practicality refers to financial limitations, time constraints, ease of administration, and scoring and interpretation (Brown, 2001, p. 386). A practical test should not cost much money; if the teachers give the students a written test for which the students must reimburse the teachers, the test is impractical. Besides, the teachers also need to prepare a scoring rubric while designing the test, and the rubric should genuinely contain criteria for measuring the ability tested. Reliability: a reliable test is consistent and dependable (Brown, 2001, p. 386). Testing a language deals with people, and it is difficult to make a test fully reliable since the condition of the test takers can be influenced by many factors. Validity is the degree to which the test actually measures what it is intended to measure (Brown, 2001, p. 387). Validity is the most important principle to consider before making a test: the teachers need to be sure which student skill will be measured. If it is listening, the students need to listen to a recording; if it is speaking, the students need to speak; if it is reading, the students need to do reading activities; and if it is writing, the students need to produce a text.
Content validity: a test is said to have content validity if its content constitutes a representative sample of the language skills, structures, etc. with which it is meant to be concerned (Hughes, 2003, p. 26). An example of content validity is a speaking test whose topic is descriptive text: we can check whether the content of the test really asks the students to describe something.
Face validity: a test is said to have face validity if it looks as if it measures what it is supposed to measure (Hughes, 2003, p. 33). When the teachers want to know the students' ability in pronunciation, they need to ask the students to speak one by one; it is not enough to ask the students merely to write down phonetic symbols without using them orally. The students need to get relevant experience of what is being measured.
Construct validity is achieved when a test taps into the theoretical construct as it has been defined (Brown, 2001, p. 389). Constructs may or may not be directly or empirically measured; their verification often requires inferential data (Brown, 2004, p. 25). From these two definitions, we can say that the test tasks should be grounded in expert theories of the language ability being tested.
Authenticity in a test may be present in the following ways: the language in the test is as natural as possible; items are contextualized rather than isolated; topics are meaningful (relevant, interesting) for the learner; some thematic organization to items is provided, such as through a story line or episode; and tasks represent, or closely approximate, real-world tasks (Brown, 2004, p. 28). The teachers need to select topics appropriate to the students' context; the topics should be relevant to the students' daily life.
Kinds of Tests; there are five kinds of test, namely proficiency tests, diagnostic tests, placement tests, achievement tests, and aptitude tests, described as follows (Brown, 2001, pp. 390-392). Proficiency tests are designed to measure people's ability in a language, regardless of any training they may have had in that language (Hughes, 2003, p. 11). They measure how well test takers have mastered a language, and the score is a number that shows their level. Examples of proficiency tests are IELTS and the TOEFL ITP or iBT.
Proficiency tests have traditionally consisted of standardized multiple-choice items on grammar, vocabulary, reading comprehension, and aural comprehension (Brown, 2004, p. 44). In a proficiency test, the students' language proficiency is tested across the four language skills: reading, speaking, listening, and writing.
A diagnostic test is designed to diagnose a particular aspect of a language (Brown, 2001, p. 390). This test is typically conducted after the students have learnt a certain topic. For example, after the students have learnt the types of conditional sentences, the teacher may conduct a diagnostic test to find out which types the students have not yet understood.
Placement tests are intended to provide information that will help to place students at the stage (or in the part) of the teaching programme most appropriate to their abilities (Hughes, 2003, p. 16). Brown (2001, p. 391) adds that a placement test typically includes a sampling of material to be covered in the curriculum (that is, it has content validity), and it thereby provides an indication of the point at which the student will find a level or class to be neither too easy nor too difficult, but appropriately challenging.
Achievement tests are directly related to language courses, their purpose being to establish how successful individual students, groups of students, or the courses themselves have been in achieving objectives (Hughes, 2003, p. 13). In these tests, the students are given tasks measuring how well they have understood the lessons.
An aptitude test is designed to measure a person's capacity or general ability to learn a foreign language and to be successful in that undertaking (Brown, 2001, p. 391). This kind of test is used to predict whether someone will succeed in learning a language. Such tests are seldom used anymore, since students make progress through the learning process itself.
Oral Proficiency Testing; testing speaking ability is the most difficult part of conducting a test. One kind of oral proficiency testing used to test speaking ability is the OPI (Oral Proficiency Interview), which is carefully designed to elicit pronunciation, fluency/integrative ability, sociolinguistic and cultural knowledge, grammar, and vocabulary (Brown, 2001, p. 396).
Critical Language Testing; one of the problems addressed by critical language testing is the widespread conviction that standardized tests designed by reputable test manufacturers (such as the Educational Testing Service, among the world's largest deliverers of large-scale tests for admission to programs in institutions of higher education) are infallible in their predictive validity (Brown, 2001, p. 398). In reality, such a test may seem valid only from the test designers' perspective.

Testing and Evaluation
For various reasons, teachers and other educational professionals spend a lot of time testing, evaluating, and assessing students. Sometimes this is to measure the students' abilities to see if they can enter a course or institution. Sometimes it is to see how well they are getting on. Sometimes it is because the students themselves want a qualification. Sometimes it is formal and public, and sometimes it is informal and takes place in day-to-day lessons.
It is necessary to make a clear distinction between formative and summative assessment. Summative assessment is the kind of measurement that takes place to round things off or to make a one-off measurement; such tests include the end-of-year tests that students take. Formative assessment, on the other hand, is the kind of feedback teachers give students as a course progresses and which, as a result, may help them to improve their performance.
Different Types of Testing. Placement tests – designed to provide information about students' abilities that will help to place them in the right classes. Diagnostic tests – designed to show how good a student's English is in relation to a previously agreed system of levels; they can also be used to expose learners' difficulties. Progress or achievement tests – designed to measure learners' language and skill progress in relation to the syllabus they have been following. Proficiency tests – designed to give a general picture of a student's knowledge and ability (rather than to measure progress). Portfolio assessment – designed to provide evidence of students' effort in the learning process; it helps students become more autonomous, fosters student reflection, and helps them to self-monitor their own learning.
Characteristics of a Good Test. Validity – a test is valid if it tests what it is supposed to test and produces results similar to some other measure of the same ability; a test is only valid if there is also validity in the way it is marked. Reliability – a good test should give consistent results: for example, if the same group of students took the same test twice within two days, they should get roughly the same result on each occasion.
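To make the test-retest idea concrete, here is a minimal Python sketch (not from the source texts: the scores and the helper function are invented for illustration) that estimates reliability by correlating two sittings of the same test; the closer the coefficient is to 1, the more consistent the test.

# A minimal sketch (invented for illustration) of test-retest
# reliability: the same group takes the same test twice, and the
# two sets of scores are correlated.

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

first_sitting = [62, 75, 58, 80, 69, 73]    # hypothetical scores, day 1
second_sitting = [64, 73, 60, 78, 70, 75]   # same students, two days later

r = pearson(first_sitting, second_sitting)
print(f"Test-retest reliability estimate: r = {r:.2f}")  # near 1.0 = consistent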
Types of Test Item: Direct and Indirect Test Items. A test item is direct if it asks candidates to perform the communicative skill which is being tested. Indirect test items, on the other hand, try to measure students' knowledge and ability by getting at what lies beneath their receptive and productive skills. Another distinction needs to be made between discrete-point testing and integrative testing: whereas discrete-point testing only tests one thing at a time, integrative test items expect students to use a variety of language at any one given time. In many proficiency tests where students sit a number of different papers, there is a mixture of direct and indirect, discrete-point and integrative testing.
Indirect Test Item Types. Multiple-choice questions were once considered the ideal test instrument for measuring students' knowledge of grammar and vocabulary. However, there are a number of problems with MCQs. First, they are extremely difficult to write well. Second, it is possible to train students so that their MCQ abilities are enhanced without actually improving their English. MCQs are still widely used, but though they score highly in terms of practicality and scorer reliability, their validity and overall reliability are suspect.
Cloze procedures once seemed like the perfect test instrument: because of the randomness of the deleted words, anything may be tested, which makes the procedure more integrative in its reach. However, it turns out that the actual score a student gets depends on the particular words that are deleted rather than on any general English knowledge. Despite such problems of reliability, cloze is too useful a technique to abandon altogether, because supplying the correct word for a blank does imply an understanding of the context and knowledge of that word and how it operates. Cloze passages are also useful as part of a test battery in either achievement or proficiency tests (a sketch of the deletion procedure appears after this paragraph). Transformation and paraphrase items ask candidates to rewrite sentences in a slightly different form, retaining the exact meaning of the original; to complete the item successfully, the student has to understand the first sentence and then know how to construct an equivalent which is grammatically possible. Sentence re-ordering, which asks students to put words in the right order to make appropriate sentences, tells us quite a lot about their underlying knowledge of syntax and lexico-grammatical elements; such items are fairly easy to write, though it is not always possible to ensure only one correct order.
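The classic fixed-ratio cloze deletes every nth word of a passage. The Python sketch below is not from the source texts: the passage, the function name, and the deletion ratio are all invented to illustrate the procedure, following the common convention of leaving the opening words intact so that students can establish the context.

# A minimal sketch (assumed, not from the sources) of the fixed-ratio
# cloze procedure: delete every nth word and replace it with a
# numbered blank, returning the gapped text plus the answer key.

def make_cloze(text, n=7, start=10):
    """Blank out every nth word, beginning at word index `start`."""
    words = text.split()
    answers = []
    for i in range(start, len(words), n):
        answers.append(words[i])
        words[i] = f"__({len(answers)})__"
    return " ".join(words), answers

passage = ("When teachers design a cloze test, they usually leave the first "
           "sentence intact so that students can establish the context "
           "before they meet the first gap in the passage itself.")

gapped, key = make_cloze(passage, n=7)
print(gapped)
print("Answer key:", key)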
Direct Test Item Types. For direct test items to achieve validity and be reliable, test designers need to do the following. Create a 'level playing field': teachers and candidates would almost certainly complain about essay questions which assume knowledge of a particular topic; they prefer questions that draw only on general knowledge, so that everyone has the same chance of succeeding. Replicate real-life interaction: in real life, when people speak or write, they generally do so with some real purpose; yet traditional writing tests have often been based exclusively on general essay questions, and speaking tests have often included hypothetical questions about what candidates might say if they happened to be in a certain situation.
The following direct test item types are a few of the many which attempt to meet the criteria mentioned above. Speaking – an interviewer questions a candidate about themselves; role-play activities where candidates perform tasks such as making introductions or ringing a theatre to book tickets; information-gap activities where a candidate has to find out information; decision-making activities, such as showing paired candidates ten photos of people and asking them to rank them from best to worst dressed; etc. Writing – writing compositions and stories; transactional letters where candidates reply to a job advertisement; information leaflets about the students' school; newspaper articles about a recent event; etc. Reading – multiple-choice questions to test comprehension of a text; matching written descriptions with pictures of the items or procedures they describe; choosing the best summary of a paragraph; matching jumbled headings with paragraphs; inserting given sentences in the correct places in a text; etc. Listening – completing charts with facts and figures from the listening text; identifying which of a number of objects is being described; identifying who says what; following directions on a map and identifying the correct house or place.
Writing and Marking Tests. Writing Tests: before designing a test and giving it to a group of students, there are a number of things we need to do. Assess the test situation – we need to remind ourselves of the context in which the test takes place; we have to decide how much time should be given to test-taking, when and where it will take place, and how much time there is for marking. Decide what to test – we have to list what we want to include in our test: what kinds of skills and what kinds of topics and situations are appropriate. Balance the elements – balancing elements involves estimating how long we want each section of the test to take and then writing test items within those time constraints. Weight the scores – our students' success or failure depends upon how many marks are given to each section of the test (see the sketch after this paragraph). Make the test work – it is absolutely vital that we try out the tests on colleagues and other students before administering them to real candidates.
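To make the weighting step concrete, here is a small Python sketch of how section marks might be combined into one total. It is not from the source texts: the sections, weights, and scores are all hypothetical.

# A minimal sketch (invented) of weighting section scores: each
# section's raw score is scaled by the share of the total marks
# that the test designer assigns to it.

SECTION_WEIGHTS = {      # hypothetical weighting scheme
    "reading": 0.30,
    "writing": 0.30,
    "listening": 0.20,
    "speaking": 0.20,
}

def weighted_total(raw_scores, max_scores):
    """Combine per-section raw scores into a single 0-100 total."""
    total = 0.0
    for section, weight in SECTION_WEIGHTS.items():
        total += weight * 100 * raw_scores[section] / max_scores[section]
    return total

raw = {"reading": 18, "writing": 22, "listening": 13, "speaking": 16}
maxes = {"reading": 25, "writing": 30, "listening": 15, "speaking": 20}
print(f"Weighted total: {weighted_total(raw, maxes):.1f} / 100")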
Marking Tests. There are a number of solutions to the problem of scorer subjectivity. Training – if scorers have seen examples of scripts at various levels and discussed what marks should be given, their marking is likely to be less erratic than if they come to the task fresh. More than one scorer – the more people who look at a script, the greater the chance that its true worth will be located somewhere between the various scores given. Global assessment scales – one way of specifying the scores that can be given to productive skill work is to create 'pre-defined descriptions of performance'; however, a description may not exactly match the student who is speaking, as would be the case where he or she had very poor pronunciation but was nevertheless grammatically accurate. Analytic profiles – marking becomes more reliable when a student's performance is analyzed in much greater detail: instead of just a general assessment, marks are awarded for different elements (a sketch combining analytic profiles with multiple scorers follows this paragraph). Scoring and interacting during oral tests – scorer reliability in oral tests is helped by separating the role of scorer from the role of interlocutor.
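The sketch below combines two of the solutions above: several scorers each award analytic-profile marks for separate elements, and the final band averages across both. The categories, bands, and marks are invented for illustration, not taken from the sources.

# A minimal sketch (invented) of analytic-profile marking with
# multiple scorers: average each element across scorers, then
# average the elements to get one overall band.

from statistics import mean

# Each scorer rates the same oral performance on a 0-5 band per element.
scorer_profiles = [
    {"pronunciation": 3, "grammar": 4, "vocabulary": 4, "fluency": 3},
    {"pronunciation": 2, "grammar": 4, "vocabulary": 3, "fluency": 3},
    {"pronunciation": 3, "grammar": 5, "vocabulary": 4, "fluency": 4},
]

def combined_profile(profiles):
    """Average each element across scorers, then average the elements."""
    elements = profiles[0].keys()
    per_element = {e: mean(p[e] for p in profiles) for e in elements}
    return per_element, mean(per_element.values())

per_element, overall = combined_profile(scorer_profiles)
print(per_element)
print(f"Overall band: {overall:.2f} / 5")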
Teaching for Tests. The effect of testing on teaching and learning is known as backwash or washback. This refers to the fact that since teachers quite reasonably want their students to pass the tests and exams they are going to take, their teaching becomes dominated by the test and, especially, by the items that are in it. Good exam-preparation teachers need to familiarize themselves with the tests their students are taking, and they need to be able to answer their students' concerns and worries. Within this context there are a number of things we can do in an exam class. Train for test types – we can show the various test types and ask the students what each item is testing, so that they are clear about what is required. Discuss general exam skills – most students benefit from being reminded about general test and exam skills, without which much of the work they do will be wasted. Do practice tests – students need a chance to practise taking the test or exam so that they get a feel for the experience, especially with regard to issues such as pacing. Have fun – there are a number of ways of having fun with tests and exams. Ignore the test – when we are preparing students for an exam, we need to ignore the exam from time to time so that we have opportunities to work on general language issues, and so that students can take part in the kinds of motivating activities that are appropriate for all English lessons.

Language Assessment: Practical Classroom Applications
Assessment can be categorized into two broad categories: informal and formal assessment. Informal assessment can take a number of forms, starting with incidental, unplanned comments and responses, along with coaching and other impromptu feedback on the students' performance. Formal assessments, on the other hand, are exercises or procedures specifically designed to tap into a storehouse of skills and knowledge.
Based on its function, assessment can be formative or summative. Brown (2004, p. 6) states that formative assessment is done in the process of forming students' competences and skills, with the goal of helping them to continue that growth process. Summative assessment, on the other hand, aims to measure, or summarize, what a student has grasped, and typically occurs at the end of a course or unit of instruction. A summation of what a student has learned implies looking back and taking stock of how well that student has accomplished the objectives, but it does not necessarily point the way to future progress. Final exams in a course and general proficiency exams are examples of summative assessment; both are formal assessments.
Recent Developments in Classroom Testing. One of the factors that have influenced the development of classroom testing is new views on intelligence. According to Brown (2001), since Gardner divided intelligence into categories, assessment, which previously relied exclusively on timed, discrete-point, analytical measures of language, now tests not only cognitive skills but also interpersonal, creative, communicative, and interactive skills. Gardner's categories of intelligence are: 1. linguistic intelligence, 2. logical-mathematical intelligence, 3. spatial intelligence, 4. musical intelligence, 5. bodily-kinesthetic intelligence, 6. interpersonal intelligence, 7. intrapersonal intelligence.
The next recent development mentioned in Brown's book is performance-based testing. Performance-based testing can take the form of open-ended problems, hands-on projects, student portfolios, experiments, labs, essay writing, and group projects. Its advantage is its higher validity.
Interactive language tests are constructed in the spirit of Gardner's and Sternberg's theories of intelligence, as students are assessed in the process of creatively interacting with others. Students can be actively involved and interested participants when their task is not restricted to providing the one and only correct answer.
Alternative assessment is the last of the recent developments in classroom testing; the innovations in language classroom testing described above have led to it.
Principles for Designing Effective Classroom Tests. The first principle proposed by Brown concerns strategies for test-takers, divided into three phases: before, during, and after the test. Before the test, teachers are advised to: give students all the information you can about the test; encourage students to do a systematic review of the material; give them practice tests or exercises; facilitate the formation of a study group; caution students to get a good night's rest before the test; and remind students to get to the classroom early. During the test, the teacher should: tell students to quickly look over the whole test in order to get a good grasp of its different parts; remind them to mentally figure out how much time they will need for each part; advise them to concentrate as carefully as possible; and alert students a few minutes before the end of the class period so that they can proofread their answers, catch careless errors, and still finish on time. After the test, the teacher should: when returning the test, include feedback on specific things the student did well, what he or she did not do well, and, if possible, the reasons for such judgments; advise the students to pay careful attention in class to whatever you say about the test results; encourage questions from students; and advise students to make a plan to pay special attention in the future to points on which they are weak.
The second principle is related to face validity. Brown (2001) describes face validity as validity seen from the students' perspective. To promote this perception, Brown (2001) suggests that the teacher pay attention to the following: a carefully constructed, well-thought-out format; a test that is clearly doable within the allotted time limit; items that are clear and uncomplicated; directions that are crystal clear; tasks that are familiar and related to the students' course work; and a difficulty level that is appropriate for the students.
The third principle is authenticity. To make a test authentic, the teacher has to design the test in the following ways: the language in the test is as natural as possible; items are contextualized rather than isolated; topics are meaningful (relevant, interesting) for the learner; some thematic organization to items is provided, such as through a story line or episode; and tasks represent, or closely approximate, real-world tasks.
The fourth principle given by Brown is washback, which "... is the benefit that the tests offer to learning" (Brown, 2001, p. 410).

Some Practical Steps in Test Construction. The following are some practical steps in constructing a classroom test, based on Brown (2001). Test toward clear, unambiguous objectives: list everything that you think your students should know or be able to do based on the material they are responsible for. From your objectives, draw up test specifications: the specifications give an indication of (a) which of the topics (objectives) will be covered, (b) what the item types will be, (c) how many items will be in each section, and (d) how much time is allocated for each (a sketch of such a specification follows this paragraph). Then draft your test, revise your test, final-edit and type the test, utilize your feedback after administering the test, and work for washback.
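A test specification can be written out as a simple data structure covering the four points above. The Python sketch below is invented for illustration; the topics, item types, counts, and timings are hypothetical, not drawn from the sources.

# A hypothetical test specification: topics, item types, item counts,
# and time allocation per section, plus simple totals for balancing.

TEST_SPEC = [
    {"topic": "simple past vs. present perfect", "item_type": "multiple choice",
     "items": 10, "minutes": 10},
    {"topic": "reading: descriptive text", "item_type": "short answer",
     "items": 5, "minutes": 15},
    {"topic": "writing: recount paragraph", "item_type": "guided composition",
     "items": 1, "minutes": 20},
]

total_items = sum(section["items"] for section in TEST_SPEC)
total_minutes = sum(section["minutes"] for section in TEST_SPEC)
print(f"{total_items} items, {total_minutes} minutes in total")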
Alternative Assessment Options. Teachers and students have become aware of the shortcomings of traditional standardized tests, and it has been proposed to assemble additional measures of students, such as portfolios, journals, observations, self-assessments, peer assessments, and the like, in an effort to triangulate data about students. The alternative assessment options are as follows. Self- and peer assessment: the rationale for self-assessment comes from a number of well-established principles of second language acquisition, among which autonomy is one of the primary foundation stones of successful learning. Journals: the categories or purposes in journal writing include language-learning logs, grammar journals, responses to readings, strategies-based learning logs, self-assessment reflections, diaries of attitudes, feelings, and other affective factors, and acculturation logs; most classroom-oriented journals are what have now come to be known as dialogue journals. Conferences: the conference has become a standard part of the process approach to teaching writing, in which the teacher, in a conversation about a draft, facilitates the improvement of the written work. Portfolios: as cited in Brown (2001), Genesee and Upshur define a portfolio as a purposeful collection of students' work that demonstrates to students and others their efforts, progress, and achievements in given areas. Portfolios include materials such as essays and compositions in draft and final forms; reports and project outlines; poetry and creative prose; artwork, photos, and newspaper or magazine clippings; audio and/or video recordings of presentations, demonstrations, etc.; journals, diaries, and other personal reflections; tests, test scores, and written homework exercises; notes on lectures; and self- and peer-assessment comments, evaluations, and checklists.
Cooperative test construction: in this option, the students construct their own test items. Assessment and Teaching: in this part, Brown reminds teachers that assessment and teaching are partners in the learning process. He also mentions several benefits of classroom assessment: periodic assessments, both formal and informal, can increase motivation as they serve as milestones of student progress; assessments can spur learners to set goals for themselves; assessments encourage retention of information through the feedback they give on learners' competence; assessments can provide a sense of periodic closure to various units and modules of a curriculum; assessments can encourage students' self-evaluation of their progress; assessments can promote student autonomy as they confirm areas of strength and areas needing further work; and assessments can aid in evaluating teaching effectiveness.

Assessment and Some Research on ELT
Assessment refers to a variety of ways of collecting information on a learner's language ability or achievement. In measuring how far students have come in the learning process, it is important to test the students as part of assessment. Assessment is a central element in curriculum design: it is the critical link between learning outcomes, content, and learning and teaching activities. The terms testing and assessment are often used interchangeably, but the latter is an umbrella term encompassing measurement instruments administered on a one-off basis, such as tests, as well as qualitative methods of monitoring and recording student learning, such as observation, simulations, or project work. Assessment is also part of evaluation, which is concerned with the overall language programme and not only with what individual learners have learnt.
There are four kinds of assessment: 1) proficiency assessment, the assessment of general language abilities acquired by the learner independent of a course of study (Carter and Nunan, 2001, p. 137); 2) assessment of achievement, what a student has learned in relation to a particular course or curriculum (Carter and Nunan, 2001, p. 137); 3) formative assessment, assessment carried out by teachers during the learning process with the aim of using the results to improve instruction (Carter and Nunan, 2001, p. 137); and 4) summative assessment, assessment that takes place at the end of a course, term, or school year, often for the purpose of providing aggregated information on programme outcomes to educational authorities (Carter and Nunan, 2001, p. 137).
The interpretation of assessment results is divided into two types: norm-referenced assessment ranks learners in relation to each other, whereas criterion-referenced assessment describes a learner's performance in relation to an explicitly stated standard (Carter and Nunan, 2001, p. 137). A small sketch contrasting the two interpretations follows.
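In the sketch below, invented for illustration (names, scores, and the pass mark are all hypothetical), the same set of scores is reported once as ranks against the group and once as judgments against a fixed standard.

# A minimal sketch (invented) contrasting norm-referenced and
# criterion-referenced interpretations of the same scores.

scores = {"Ani": 48, "Budi": 72, "Citra": 65, "Dewi": 81, "Eko": 57}
PASS_MARK = 60  # a hypothetical explicitly stated standard

def norm_referenced(scores):
    """Rank learners in relation to each other (1 = highest score)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return {name: rank for rank, name in enumerate(ranked, start=1)}

def criterion_referenced(scores, standard):
    """Judge each learner against the fixed standard only."""
    return {name: ("meets standard" if s >= standard else "below standard")
            for name, s in scores.items()}

print(norm_referenced(scores))
print(criterion_referenced(scores, PASS_MARK))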
Furthermore, based on the discussion by Geoff Brindley (cited in Carter and Nunan, 2001, pp. 137-138), there are three types of validity: 1) construct validity, the extent to which the content of the test/assessment reflects the current theoretical understanding of the skill(s) being assessed; 2) content validity, whether the test represents an adequate sample of the ability; and 3) criterion-related validity, the extent to which the results correlate with other independent measures of the ability.
Assessment is carried out to collect information on learners' language proficiency and achievement that can be used by stakeholders in language learning programmes for various purposes. These purposes include selection, certification, accountability, diagnosis, instructional decision-making, and motivation (cited in Carter and Nunan, 2001, p. 138).
Under the influence of structural linguistics, language tests were designed to assess learners' mastery of different areas of the linguistic system, such as phoneme discrimination, grammatical knowledge, and vocabulary (Carter and Nunan, 2001, p. 138). To maximise reliability, tests often used objective testing formats such as multiple choice and included large numbers of items. However, discrete-item tests provided no information on learners' ability to use language for communicative purposes (Carter and Nunan, 2001, p. 139). Testers therefore began to look for other, more global forms of assessment which were able to tap the use of language skills under normal contextual constraints: integrative tests, such as cloze tests and dictation, which require learners to use linguistic and contextual knowledge to reconstitute the meaning of spoken or written texts (Carter and Nunan, 2001, p. 139).
In language assessment there are two major points to focus on (Geoff Brindley in Carter and Nunan, 2001, p. 139): 1) the key question of how to define language ability, and 2) self-assessment of language ability. To answer the first question, it is necessary to describe the nature of the abilities being assessed, known as construct definition. Assessment must not only assess language performance but also meet the requirements of validity and reliability while remaining practically feasible. Direct assessment of language performance is time-consuming, particularly individualised testing; thus, if teachers are required to construct and administer their own assessment tasks, it is crucial to provide adequate support and to establish systems for ensuring the quality of the assessment tools used (Bottomley et al., 1994; Brindley, 1998a; Geoff Brindley in Carter and Nunan, 2001, p. 141).

Bibliography

Brown, H. Douglas. 2001. Teaching by Principles: An Interactive Approach to Language Pedagogy. Englewood Cliffs, NJ: Prentice Hall.

Brown, H. Douglas. 2004. Language Assessment: Principles and Classroom Practices. White Plains, NY: Pearson Education.

Carter, R. and Nunan, D. (eds.). 2001. The Cambridge Guide to Teaching English to Speakers of Other Languages. New York: Cambridge University Press.

Harmer, Jeremy. 2007. How to Teach English. Harlow: Pearson Education Limited.

Hughes, Arthur. 2003. Testing for Language Teachers (2nd ed.). Cambridge: Cambridge University Press.
