Supplemental funding from other government agencies has allowed LARC to complete additional CAST (Computer Assisted Speech Tool) advanced level online language tests in Filipino/Tagalog, German, Italian, Japanese, French, and Pashto, as well as in the original languages of Arabic, Persian, Spanish, Iraqi, and Egyptian. Because the infrastructure of the test has been developed as a template, and because it functions reliably over networks, it is relatively inexpensive to develop new CASTs in additional languages by rendering items into cultural and linguistic equivalents of the items vetted in the original test.
Over time, LARC testing researchers have found that perhaps the greatest value of the instrument is its ability to capture and store learners’ speech in an online, searchable database. These learner-generated speech acts respond to ACTFL OPI- and task-based prompts designed to elicit responses from online test takers. Tasks are targeted at approximately the advanced level, as that is the level needed for most professional functions, such as teaching a language or interpreting. The project originally involved the Defense Language Institute, the Center for Applied Linguistics (CAL), the American Council on the Teaching of Foreign Languages (ACTFL), LARC@SDSU, and Brigham Young University, all of them institutions with significant experience in language testing. These partners have remained interested in and committed to the development of CAST in its next incarnation as a teacher training (NCATE) and research tool, and many continue to serve on LARC’s advisory board (Ray Clifford, Meg Malone, Jerry Larson, and Elvira Swender).
Prospective teachers of most world languages must demonstrate advanced level proficiency (with Arabic and other ‘difficult’ languages being the exception, requiring only Intermediate High) as a condition of entry to professional teacher training. The burden is on the applicant to prove proficiency; however, many language tests are too expensive, lack helpful, tailored feedback, or are unreliable measures of advanced level proficiency. Research on the reliability of CAST tests (originally developed for Arabic and Spanish) has been conducted with NCATE and ACTFL, and robust positive correlations between CAST and the full OPI have emerged. Additional CAST tests developed by LARC attempt to render culturally and linguistically appropriate equivalents of the accepted CAST items (reviewed by native speakers and ACTFL-certified testers). Teacher education schools have indicated that tests at the intermediate and superior levels are also needed.
Based on the results of prior research and the establishment of a framework for the CAST by the Center for Applied Linguistics, several steps are needed and feasible: 1) testing for reliability and validity of the new advanced level tests that have been or will soon be generated; 2) developing new CAST tests at the intermediate and superior levels, based on the CAL frameworks and using ACTFL consultants and/or ACTFL-trained item writers; and 3) adapting the framework for languages of importance to our Title VI sister centers.
Because the technical frameworks of the CAST have been well established and vetted, adding new levels is very feasible. Building on CAST’s technological infrastructure, items can be added, reviewed, approved, and inserted into the database, where they become available for the algorithm to select on demand. Nationally normed language tests exist and are used in the field, largely due to Title VI and ACTFL efforts. The more difficult question is: what nationally normed language tests are now available, and is there a need for CAST in this arena, or is it best used as a feedback loop to students and their teachers?
In the new grant cycle, LARC will examine the range of tests currently available and the purposes that they serve. This research will be posted on LARC’s website. In the assessment area, we will also work with community college representatives as they generate Student Learning Objectives (SLOs), and this effort will assist us in articulating among the CCC (California Community College), CSU (California State University), and UC (University of California) systems. Finally, over the past three years we have co-sponsored a series of language testing symposia for National Resource Centers of Latin American Studies. We will continue to support joint projects with SDSU’s NRC, the Center for Latin American Studies, to examine approaches and constructs for training teachers to conduct effective evaluations for a variety of purposes. Without the infrastructure provided by university computing services, the computer science graduate student team (6 members), and our national collaborators, we would not have had the same level of project support and virtually guaranteed results in this area.
In this project we will also work with the LRC at Penn State to co-host summer institutes in testing and to podcast the consultants’ contributions.
More Information on Each Project Area