Laptops in the classroom: Do they make a difference?

ABSTRACT

In 1996 the College of Engineering at the University of Oklahoma started to require all incoming students to have a laptop computer equipped with a wireless Internet card. Because of a pilot study and a voluntary phase-in over the first two years, two groups of students moved through the curriculum: those with and those without laptops. During 1998 and 1999, when these students entered their junior year, we offered two sections of a third-year water resources course: one for students who owned laptops and one "traditional" section for those who did not. We assessed student performance to evaluate whether the laptops helped improve student learning. Although not a perfectly controlled experiment (i.e., the student groups were different), the two sections were uniform in terms of course content and assignments. Because of their inherently large standard deviations, class metrics (grades) are not conclusive, but they do indicate that the laptop students performed slightly better than the non-laptop students, even though their composite grade point average entering the course was lower. Evaluations do clearly show that, when the technology is used properly and when class time is not spent resolving technical problems, the laptop students had a more positive learning experience.

I. INTRODUCTION

A growing number of engineering departments are requiring students to purchase a laptop computer, often equipped with wireless Internet connections [2, 4, 6, 7, 11]. This so-called tetherless computing offers a number of challenges and opportunities, including the ability to turn any classroom into a networked computer lab. The College of Engineering (CoE) at the University of Oklahoma (OU) implemented such a program, starting in 1996 with a pilot study, moving to a voluntary phase-in in 1997, and then requiring all incoming students to purchase a laptop in 1998. A 1995 CoE report [5] identified the following reasons for a laptop program (vs. desktop computer labs):

* greater access to computers;

* increased number of networked computer labs;

* better use of resources;

* more incentive for faculty to use technology in education; and

* more opportunities for students to learn the latest technology.

In this paper, our interest is not in the "practicality" of such a move, but rather in evaluating the impact of wireless laptops on the educational process. In other words, "Do They Make a Difference (in student learning)?"

The phased implementation resulted in two sections of a junior-level water resources course being offered in both Fall 1998 and Fall 1999: one section for students with laptops ("laptop section"), and one section for students without a laptop computer ("non-laptop section"). We used this opportunity to conduct a "controlled" experiment to evaluate the efficacy of wireless laptops in the classroom. Herein, we describe the course and evaluation methods and share observations and recommendations.

II. COURSE DESCRIPTION

The course under study, CE 3212-Environmental Engineering I (EE1), is a junior-level water resources course that is required for all civil and environmental engineering majors. EE1 is a core course in the Sooner City curriculum; consequently, students are faced with designing the primary components of the city's water supply and sewer system. The EE1 course objectives are as follows:

"Introduce the student to selected topics in environmental engineering, including the following: hydrology, hydraulics, water supply and distribution, pump design, wastewater collection, and storm water management. Both a conceptual understanding and practical design tools will be emphasized. All major concepts will be presented in the context of designing infrastructure elements for a city."

Common elements for both the laptop and non-laptop sections included the following: use of permanent small groups, homework assignments, quizzes, and exams. Also, the first two authors of this paper team-taught both sections; this was not tag-team teaching, as both instructors were present in every class.

In both years, grades for the two sections were based on a weighted average of four components, as shown in Table 1. These grades are used as part of the evaluation protocol.

The third component in Table 1, RATs, stands for "Readiness Assessment Tests," which, as the name implies, are quizzes to assess the preparedness of students for new material. RATs are announced quizzes over a reading assignment (text and/or handouts); they are given at the beginning of each major topical area, before any class time is spent on the material. The same RAT is taken first by individuals and then by groups, followed by a class discussion. RATs are closed book and typically consist of true/false and multiple-choice questions that test understanding of basic concepts. RATs prepare students for class, free up class time for more complex engineering issues, foster both self-learning and team building, and guide class discussion [12].

To help students see the "big picture," and to help motivate learning, both sections used the "just-in-time" learning paradigm, wherein the Sooner City design tasks drive the syllabus [13]. Design tasks included:

1) Estimate the water and sewer demands.

2) Design the water distribution system.

3) Design a pump station.

4) Design the sanitary sewer collection system.

5) Design the storm drainage system, including detention ponds.

Table 2 summarizes each section's makeup. Enrollment in the laptop section was voluntary. Note that in both years the average GPA of the non-laptop section is higher than that of the laptop section.

In addition to the class metrics shown in Table 1, we tracked student perceptions and learning through mid- and end-of-semester evaluations conducted by OU's Instructional Development Program, and we analyzed results of the traditional teacher/course evaluation (TCE) forms.

III. USE OF TECHNOLOGY

Use of technology for the course can be broadly lumped into three categories: course materials, virtual experiments, and design software, which are discussed in the following subsections.

1) Web Access: All materials were placed on the Web under the umbrella of the Sooner City project (www.SoonerCity.ou.edu), including handouts, design criteria, design data, homework assignments, sample exam questions, and Web links. For the laptop section, students could download the documents or data, make electronic copies, and annotate electronically in class. For the non-laptop section, the information was displayed with an LCD projector, but students did not have direct access to the Web. Similarly, when using software for virtual experiments or for design calculations (see below), the non-laptop students watched an instructor demonstration, while the laptop students actively participated in the exercise. To keep the sections consistent, we conducted exactly the same software exercises in both sections.

2) Virtual Experiments with Spreadsheets: The second use of technology included virtual experiments on the laptops in lieu of physical experiments. Again, note that while the laptop students were able to conduct the virtual experiments in real time during class, we could only demonstrate them for the non-laptop section.

One example of such a virtual experiment is evaluating the headloss during pressure pipe flow. With a spreadsheet, students evaluated headloss, the dependent variable, as a function of three variables: pipe diameter, fluid velocity, and pipe length. Since the equations are hidden in the spreadsheet, the students cannot "cheat" by looking at the equation; instead, they must establish the correlation between variables based on cause and effect. As the student is looking for a particular relationship, he/she can enter a range of values into the spreadsheet and observe the resulting headloss on an automated plot. Data noise can be incorporated in the spreadsheet via a random number generator so that the graphs show scatter, as observed in actual experiments. Students can then analyze the trend and determine the relationship between headloss and the independent variables. In this way, the student can "experimentally determine" the governing relationship, much as the empirical law was originally developed. Discussions center on the role/use of empirical laws and the value of dimensional analysis.
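The sketch below illustrates, in Python rather than a spreadsheet, the mechanics of this kind of virtual experiment; because the hidden formulas are not reproduced here, it assumes the Darcy-Weisbach relation as the governing equation and made-up pipe parameters purely for illustration.

```python
import numpy as np

def headloss(diameter_m, velocity_ms, length_m, friction_factor=0.02,
             noise_frac=0.03, rng=None):
    """Hidden governing relation (assumed): Darcy-Weisbach,
    h_f = f (L/D) V^2 / (2g), with multiplicative noise so the plotted
    "observations" show scatter like a real laboratory run."""
    g = 9.81  # gravitational acceleration, m/s^2
    h = friction_factor * (length_m / diameter_m) * velocity_ms ** 2 / (2 * g)
    rng = rng or np.random.default_rng(0)
    return h * (1 + noise_frac * rng.standard_normal(np.shape(h)))

# The "experiment": hold diameter and length fixed, sweep velocity, record headloss.
velocities = np.linspace(0.5, 3.0, 15)        # m/s
h_obs = headloss(0.3, velocities, 100.0)      # D = 0.3 m, L = 100 m (illustrative)

# A log-log fit recovers the exponent on velocity (about 2), mimicking how the
# student "experimentally determines" the governing relationship from the trend.
slope, _ = np.polyfit(np.log(velocities), np.log(h_obs), 1)
print(f"fitted exponent on V: {slope:.2f}")
```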

3) Commercial Design Software: The third use of technology centers on commercial design software to complete the complex design tasks. For this course, we used Haestad Methods' suite of water resources tools, which is widely used in consulting and thus exposes the students to tools used in private practice [14]. However, we did not advocate a "black box" approach; rather, students learned the theory and limitations behind the software, including checking calculations by hand. We then proceeded to teach them how to input data, run the model, and display and interpret results. Once again, in the laptop section, the software was taught via an active learning sequence, while the non-laptop students observed passively.

IV. ASSESSMENT

Assessment tools and results can be broken down into three sources of information: measures of student performance, standard course evaluations, and special questionnaires.

A. Class Performance Metrics

As in any research, trends should be evaluated based on quantitative data, grades in this case. While this is a logical approach, the inherent spread in class grades tends to obscure trends between the two sections. In spite of this limitation, several interesting observations can be made.

We compared metrics for individual RATs, group RATs, homework, class participation, time to complete homework, midterm and final exams, and final grade. Figure 1 shows the 1998 results, in which the laptop section has consistently higher scores, although the standard deviation of the data precludes the differences from being statistically significant. However, this trend is all the more encouraging when we recall that the laptop section had a lower overall GPA entering the course.

The laptop class outperformed the non-laptop section in two areas: class participation (higher) and time needed to do the homework (less). The laptop class was more interactive, as students conducted sensitivity analyses and virtual experiments and worked on group projects. Since the students obtained hands-on experience with software during class, their homework efficiency increased outside of class-the time to complete the homework for the laptop section was an average of 4.5 hours per assignment versus 5.6 hours for the non-laptop section. Figure 2 shows similar trends for 1999.

Median scores for 1998 and 1999 (not shown) follow a similar pattern as the means, viz., the laptop section generally had a higher median, but the only metric in which it is statistically higher is the 1998 class participation score. For this reason, it is instructive to look at other comparison metrics to evaluate the two sections, as described in the next section.
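As an illustration (not the paper's actual analysis), the following sketch applies Welch's two-sample t-test to hypothetical stand-in scores; with class-sized samples and the spread typical of course grades, the p-value stays well above 0.05 even though one section's mean is higher.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in scores for one metric (e.g., final exam percentage);
# the actual per-student data from the two sections are not reproduced here.
laptop     = np.array([82, 75, 91, 68, 88, 79, 85, 73, 90, 77], dtype=float)
non_laptop = np.array([78, 70, 92, 65, 84, 76, 81, 69, 88, 74], dtype=float)

# Welch's t-test (unequal variances), a reasonable choice for sections of
# different sizes with no assumption of equal spread.
t_stat, p_value = stats.ttest_ind(laptop, non_laptop, equal_var=False)

print(f"mean difference: {laptop.mean() - non_laptop.mean():+.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # here p is well above 0.05
```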

B. Standard TCE (Teacher/Course Evaluation) Forms

TCE forms are required by OU and are administered by student volunteers near the end of every semester. A five-point scale quantifies responses with 1 corresponding to "strongly agree" and 5 corresponding to "strongly disagree."

In 1998, the arithmetic average of all questions showed that the laptop section was 5.6 percent lower (more negative) than the non-laptop section. In 1999, the average of the laptop section was 34 percent higher (more positive) than the non-laptop section. Based on these metrics and the written comments, we can infer that in 1998 we were still very much on the learning curve in terms of how best to use the laptop to improve the learning environment. In addition, some technical glitches arose during that first year that detracted from the class. By contrast, in 1999, the materials and our techniques were better developed, which resulted in a much more positive learning experience. We believe that the 1999 data better reflect the students' attitude toward a well-run laptop course.

C. Special Mid-Semester and End-of-Semester Evaluations

The Director of OU's Instructional Development Program (and a co-author of this paper) administered special questionnaires to students in both sections and in both years of the program. These were administered at mid-semester (for formative feedback) and at the end of the semester (for summative feedback) [1, 3].

1) Mid-Semester Questionnaire: The mid-semester questionnaire asked students about their perceptions and reactions regarding three general topics: their learning, the specific teaching/learning procedures, and the use of laptops.

Students in the non-laptop section did not see the value of laptops, while students in the laptop section did. When asked about their preference for courses with or without laptops, students in the two sections responded as follows:

Students who did prefer laptop sections made comments like the following:

* "When you have a question about a problem, you can take your laptop to the teacher and work it through. Otherwise you have to just ask the question and then go home and try to remember the solution."

* "Everything today is done on the computer. This may help me to be more marketable."

Students who did not prefer laptop sections made these comments:

* "Laptops don't help you learn the concept, just how to use the software. Time is wasted going over the software."

2) End-of-Semester Questionnaire: The responses on the end-of-semester questionnaire were similar to those at mid-semester. Both groups liked the teaching procedures: the Sooner City design project, the team teaching, and working in small groups. Interestingly, though, the non-laptop students seemed to have more difficulty establishing the cohesiveness of their groups: there were a few more complaints about some students not carrying their share of the work or not coming to out-of-class meetings. The laptop users did not complain of these problems.

The one other general difference between the two sections was that the laptop students had a better understanding of the general value and importance of computers in both design work and in learning. When asked how useful the laptops were for enhancing the learning process (on a scale of 1-5, with 5 being high), 80% of the laptop students responded with a "4" or "5".

V. CONCLUSIONS AND RECOMMENDATIONS

Our two-year experiment has provided some insight into the question posed in the title: "Do Laptops Make a Difference?" The following list summarizes what we feel are the most significant results and observations.

* The instructors must commit the time and resources to make good use of the technology.

* While the large standard deviation inherent in class grades makes it difficult to be conclusive, the trend of the means and medians suggests that the laptop section performed a little better in nearly all aspects of the course, which is particularly noteworthy since the laptop section had a lower overall GPA (cf. Table 2). However, this is tempered by the fact that the student/faculty ratio for the laptop sections was one-half that of the non-laptop section, thus providing more one-on-one mentoring.

* Class dynamics were consistently better in the laptop section, which is reflected in the much higher class participation score.

* If nothing else, laptops in the classroom help students learn the latest technology (hardware and software).

* Virtual experiments and active learning exercises tap into the technology much more effectively than simply posting Web notes.

* While we have found the laptops to be effective tools for our classes, we believe that their efficacy is a function of both the subject matter being taught and the instructor's skills/interests.

ACKNOWLEDGMENTS

Support for this work was provided in part by funds from NSF (AC19623592 and EEC9872505), the OU College of Engineering, and the OU School of Civil Engineering and Environmental Science. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the funding agency.

REFERENCES

[1] Angelo, T.A., and K.P. Cross. 1994. Classroom Assessment Techniques: A Handbook for College Teachers. 2nd revised edition. San Francisco: Jossey-Bass.

[2] Ayres, I. Lectures vs. laptops. N.Y. Times, March 20, 2001.

[3] Bloom, B.S., J.T. Hastings, and G.F. Madaus. 1971. Handbook on Formative and Summative Evaluation of Student Learning. New York: McGraw-Hill.

[4] Chen, J.C., M. Ellis, J. Lockhart, S. Hamoush, C.E. Brawner, and J.G. Tront. 2000. Technology in engineering education: What do the faculty know and want? J. Engineering Education 89(3): 279-283.

[5] Cheung, J., et al. 1995. Committee Report on Student-Owned PC. Submitted to Dean B. Crynes, College of Engineering, University of Oklahoma. 76 pp.

[6] Crynes, B.L. Universal student computer access: Requiring engineering students to own computers. J. Engineering Education 86(4): 301-304.

[7] Griffioen, J., W.B. Scales, and J.E. Lumpp, Jr. 1999. Teaching in real-time wireless classrooms. J. Engineering Education 88(4): 397-402.

[8] Howell, K.C. 1996. Introducing cooperative learning into a dynamics lecture class. J. Engineering Education 85(1): 69-72.

[9] Johnson, D.W., R.T. Johnson, and K.A. Smith. 1991. Cooperative Learning: Increasing College Faculty Instructional Productivity. ASHE-ERIC Higher Education Report No. 4. Clearinghouse on Higher Education, George Washington University, Washington, DC.

[10] Johnson, D.W., R.T. Johnson, and K.A. Smith. 1991. Active Learning: Cooperation in the College Classroom. Edina, MN: Interaction Book Company.

[11] Kadiyala, M., and B.L. Crynes. 2000. A review of literature on effectiveness of use of information technology in education. J. Engineering Education 89(2): 177-189.

[12] Kolar, R.L., and D.A. Sabatini. 2000. Environmental modeling: A project-driven, team approach to the theory and application. J. Engineering Education 89(2): 201-207.

[13] Kolar, R.L., K.K. Muraleetharan, M.A. Mooney, and B.E. Vieux. 2000. Sooner City: Design across the curriculum. J. Engineering Education 89(1): 79-87.

[14] Meadows, M.E., and T.M. Walski (eds.). 1999. Computer Applications in Hydraulic Engineering. Third Ed. Haestad Press, 316 pp.

[15] Michaelsen, L.K., W.E. Watson, J.P. Cragin, and L.D. Fink. 1982. Team learning: A potential solution to the problems of large classes. Exchange: The Organizational Behavior Teaching Journal. 7: 13-22.

[16] Michaelsen, L.K. 1992. Team learning: A comprehensive approach for harnessing the power of small groups in higher education. To Improve the Academy 11: 107-122.

[17] Michaelsen, L.K., W.E. Watson, and R.H. Black. 1989. A realistic test of individual versus group consensus decision making. J. Applied Psychology 74(5): 834-839.

[18] Mourtos, N.J. 1997. The nuts and bolts of cooperative learning in engineering. J. Engineering Education 86(1): 35-38.

R. L. KOLAR

School of Civil Engineering and Environmental Science

University of Oklahoma

D. A. SABATINI

School of Civil Engineering and Environmental Science

University of Oklahoma

L. D. FINK

Instructional Development Program and Department of Geography

University of Oklahoma

AUTHOR BIOGRAPHIES

Randall L. Kolar is an Associate Professor in the School of Civil Engineering and Environmental Science at the University of Oklahoma, where he is also Associate Director of the Environmental Modeling/GIS lab and the Environmental and Groundwater Institute. He received his undergraduate degrees in Civil Engineering and Mathematics from the University of Idaho and his Ph.D. in Civil Engineering (Water Resources) from the University of Notre Dame. Research interests center on computational hydraulics/hydrology; in the educational field, he is very interested in alternative delivery techniques and bringing "real world" engineering into the classroom via the Sooner City project. In 2000, he received the ASEE Dow Outstanding New Faculty Award.

Address: Civil and Environmental Engineering Science, University of Oklahoma, 202 W. Boyd St., Room 334, Norman, OK, 73019-1024; telephone: 405-325-4267; fax: 405-325-4217; e-mail: kolar@ou.edu.

David A. Sabatini is Professor and Sun Oil Company Chair in the School of Civil Engineering and Environmental Science at the University of Oklahoma, where he is also the Director of the Environmental and Ground Water Institute and Associate Director of the Institute for Applied Surfactant Research. He joined the faculty at OU in 1989 after receiving his Ph.D. from Iowa State University, his MSCE from the University of Memphis, and his BSCE from the University of Illinois. Research interests include subsurface contaminant transport and remediation processes and enhanced engineering educational methods. Awards include the ASEE Dow Outstanding Young Faculty Award.

Address: Civil and Environmental Engineering Science, University of Oklahoma, 202 W. Boyd St., Room 334, Norman, OK, 73019-1024; telephone: 405-325-4273; fax: 405-325-4217; e-mail: sabatini@ou.edu.

L. Dee Fink is the Director of the Instructional Development Program at the University of Oklahoma. He received his Ph.D. from the University of Chicago in Geography and Education. He has taught courses on world geography, college teaching, and in the College of Liberal Studies, an interdisciplinary program at Oklahoma. He is currently working on two books: one on team learning (a special form of teaching with small groups) and another on how to design courses for significant student learning.

Address: Instructional Development Program, University of Oklahoma, Hester Hall, Room 203, Norman, OK, 73019; telephone: 405-325-2323; fax: 405-325-7402; e-mail: dfink@ou.edu.
