Monday, April 9, 2012

JCW Elementary - Robyn Allphin - NISD


1.     Describe the purposes for and various stages of formative evaluation of a technology plan. (Dick & Carey, 1990)

The main purpose of formative evaluation is to revise and improve instruction to make it “more effective, efficient, interesting/motivating, usable, and acceptable” (Dick & Carey, 2009). The phases of formative evaluation are expert review, one-to-one, small group, and field test. The phases usually proceed in that order, “although expert review and one-to-one are often carried out at the same time” (Dick & Carey, 2009). In the expert review, an expert determines the strengths and weaknesses of the rough draft; in the one-to-one, a single learner works through the instruction and provides the evaluator with observations, notes, and questions during and after the instruction. In the small group, “the evaluator tries out the unfinished instruction with a group of learners and records their performances and comments” (Dick & Carey, 2009). In the field test, “the evaluator observes the instruction being tried out in a realistic environment with a group of learners” (Dick & Carey, 2009).

2.     Describe the instruments you used in a formative evaluation.

The instruments chosen for a formative evaluation should “measure the acquisition of the skills, knowledge, or attitudes you are looking for” (Dick & Carey, 2009). The instruments used in this formative evaluation include a Google Docs survey, interviews with the technology department, analysis of STaR Chart results, and observations. I will collect input from the technology department, administrators, teachers, students, and community members.
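As a rough illustration of how the survey instrument's responses might be tabulated once collected, the sketch below assumes the Google Docs survey has been exported to a CSV file with one header row and one column per question; the file name and question text are hypothetical, not part of the actual plan:

```python
import csv
from collections import Counter

# Hypothetical export of the Google Docs survey responses;
# assumes one header row and one column per survey question.
SURVEY_FILE = "technology_survey_responses.csv"

def summarize_question(rows, question):
    """Tally how often each answer was given for one survey question."""
    return Counter(row[question] for row in rows if row.get(question))

with open(SURVEY_FILE, newline="") as f:
    rows = list(csv.DictReader(f))

# Example: tally a hypothetical question about training needs.
print(summarize_question(rows, "Which technology training do you need most?"))
```

A tally like this makes it easy to compare responses across the teacher, student, and community groups before sitting down with the technology department.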

3.     Collect data according to a formative evaluation plan for a given technology plan or instructor presentation.

“Data may describe attitudes, abilities, capabilities, status and characteristics of people, processes, curricula and other soft items, hardware, equipment, budget, finances, and other entities” (Anderson, 1996). 

Technology:
Technology objectives and goals will be assessed through interviews, observations, and survey results from teachers, students, and parents. I will work with the technology department to review the results, determine the feasibility of each goal, and decide how we can accomplish it.

Funding:
I will interview the administrators, the business department, and the grant writer to determine the availability of funds and possible methods of obtaining additional funds. In addition, a needs analysis and a budget analysis will be performed, as sketched below.
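As an illustration only, the sketch below uses entirely hypothetical items and dollar figures to show how a simple budget-gap analysis might compare requested purchases against available funds; the real numbers would come from the business department and the grant writer:

```python
# Hypothetical figures for a simple budget-gap analysis;
# actual items and amounts would come from the needs analysis.
requested_items = {
    "interactive whiteboards": 12_000,
    "student laptops": 25_000,
    "teacher training stipends": 5_000,
}
available_funds = 30_000  # district allocation plus awarded grants

total_requested = sum(requested_items.values())
gap = total_requested - available_funds

print(f"Total requested: ${total_requested:,}")
if gap > 0:
    print(f"Shortfall of ${gap:,} -- additional grants needed.")
else:
    print(f"Surplus of ${-gap:,} available.")
```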

Management:
To determine the success of workshops and professional development opportunities, teachers will receive a survey to express their needs and wants regarding training. Incentives for attending will be considered and carefully managed by the technology department. In addition, training sessions will be announced via the school Google calendar, and certificates will be uploaded and organized on Project Share.

References:
Anderson, L. (1996). Guidebook for developing an effective instructional technology plan (Version 2.0). Mississippi State, MS: Mississippi State University.

Dick, W., Carey, L., & Carey, J. O. (2009). The systematic design of instruction (7th ed.). Upper Saddle River, NJ: Pearson.

8 comments:

  1. Which of the stages of formative evaluation do you feel is the most important? I feel that one-to-one is very beneficial, but observation can also be helpful when evaluating a program. I know that all stages are important, but I was wondering which you think would be most beneficial to your program.

  2. I believe the field tests are most important because the instruction is being used authentically with actual learners. To determine whether it works well, it should be tested with a representative group of users. However, one-to-one feedback is also important because it provides a more detailed observation of the instruction.

  3. Robyn,
    I have read that data can be collected for comparison groups. Often, it is helpful when evaluating a program to compare one group to another. Typically, comparisons are made between students or teachers who have been exposed to a particular initiative and students or teachers who have not been exposed (Quiñones & Kirshstein, 1998). How do you think comparison groups would be beneficial to collect data for your technology plan?

    Quiñones, S., & Kirshstein, R. (1998). An Educator’s Guide to Evaluating the Use of Technology in Schools and Classrooms. Retrieved from http://www.au.af.mil/au/awc/awcgate/ed-techguide/handbook2.pdf

    Replies
    1. Comparison groups would be a great way to collect data because they let you justify why an initiative did or did not work.

  4. Robyn,
    I thought it was good that you included both interviews and surveys. In my own experience, people tend to rush through surveys, but give good feedback in interviews. Is this what you have experienced?

    Kayla

    Replies
    1. Yes, many people tended to rush through the surveys, but they elaborated much more during an interview.

  5. Robyn,
    In my opinion, each instrument used should be carefully designed and executed to make sure the data is accurate and valid. Inaccurate data can lead to an unsuccessful technology plan when implemented.

    Clark, D. (2010). Types of Evaluations in Instructional Design. Retrieved April 11, 2012, from http://www.nwlink.com/~donclark/hrd/isd/types_of_evaluations.html

    Replies
    1. I agree. When designing any type of instrument dealing with data, you should always ensure it is as valid as possible.

