STUDENTS' UNDERSTANDING OF THE PROCEDURES
OF SCIENTIFIC ENQUIRY

Robin Millar

University of York, U.K.

Introduction: teaching science content and science method

The main aim of the science curriculum is to help students understand, and become able to use, the accepted scientific explanations of the behaviour of the natural world. In addition, most curricula also aim to develop students' understanding of the scientific approach to enquiry. Indeed, the very word 'understanding' implies that the student has not merely accepted a particular scientific explanation as valid but can explain their grounds for doing so. This requires some understanding of the methods used by scientists to establish the scientific view of phenomena. Implicit in many science curricula and much science education practice is the view that such understanding does not have to be made explicit: students will pick it up from their experience of seeing practical demonstrations and carrying out experiments by following instructions. Some science educators, however, have argued that explicit teaching about the methods of science is necessary, even (in some cases) that an understanding of science method (or science 'processes') is more important than a knowledge of science content.

Lack of consensus about scientific method: a problem for researchers in science education

Whilst a great deal of research has been carried out on students' understanding in many science content domains, much less work has been done on understandings of scientific enquiry. One reason for this difference is not difficult to discern: whereas there is general agreement within the scientific community about the scientific understanding of forces and motion, or electric circuits, or thermodynamics, there is little agreement amongst philosophers of science about whether a 'scientific method' exists and how, if at all, the scientific approach to enquiry can be characterised. This lack of consensus matters because research into students' ideas in science is essentially normative, that is, it uses the accepted scientific view as a 'template' and seeks to describe students' ideas in relation to it. Even in studies which do not set out specifically to compare students' understandings with the accepted scientific view, this view informs the initial analysis of the content of the domain and enables the researcher(s) to decide which ideas to probe. When we consider understandings of scientific enquiry, on the other hand, there is no agreed understanding to tell us which questions we should ask or which student behaviours we should observe. We lack an agreed 'map' of the domain of interest.

This, of course, may also explain why an understanding of the methods of scientific enquiry is often treated as a tacit aspect of science learning. By assuming that students will pick up the necessary understanding from their experiences in the science classroom and laboratory, we avoid the need to specify exactly what this learning entails. To appreciate the problem more clearly, let us consider for a moment what it would mean to develop students' understanding of scientific methods of enquiry and their ability to use these methods in their own investigations. We would be aiming, through our teaching, to help students become more 'expert' in selecting productive questions to investigate, designing suitable experiments to collect data which bear on these questions, making a planned series of observations or measurements with due attention to accuracy, validity and reliability, analysing and interpreting these data to reach a conclusion which is supported by the data, and evaluating the quality of the support which their evidence gives to their conclusion. (The terms 'productive' and 'suitable' in the account above mean, of course, as seen from the perspective of science.) All the decisions involved clearly depend, to a very appreciable extent, on the students' science content knowledge in the domain concerned. So too do many of the specific 'tactics' or design features which are the marks of 'quality' in practical investigations in particular science domains: taking steps to reduce heat losses in thermodynamics investigations, deciding where to place the ammeter and voltmeter in a measurement of electrical resistance, and so on. As a result, some would question whether it makes much sense to talk of a general 'scientific approach to enquiry' or 'scientific method'.

On the other hand, it seems clear that students, especially those who continue the study of science to more advanced levels, do become better at designing suitable investigations and at carrying them out. They appear to have acquired some understanding of general characteristics of the scientific approach to enquiry, which they can then apply to new investigations they are asked to carry out. Much of this may, of course, derive from their growing science content knowledge. But some part of it may reflect a growing understanding of the procedures of scientific enquiry - an understanding of key ideas about systematic enquiry in the scientific mode which can be applied to many investigations in different areas of science.

The challenge, then, for science educators interested in this aspect of science education is greater than for those who set out to teach science content or to research students' understandings of science content domains. They must first produce a framework, or model, of the scientific approach to enquiry, on which to base their teaching approach or their empirical research on what students understand and how this affects the things they actually do, or can do. In the remainder of this chapter, I want to consider some of the ways in which researchers have tried to represent the methods of scientific enquiry, briefly discussing some of their main findings. In many cases, however, the value of these findings in helping us to understand students' learning or to take decisions about curriculum depends on the validity of the conceptual framework within which the work was planned and carried out. So my overarching aim is to draw some conclusions about the sorts of models which may be able to sustain a progressive research programme, and hence inform teaching, in this area.

The scientific approach to enquiry: educational models

1 The scientific approach as integration of 'process skills'

Some science educators and science courses, from a range of countries, have argued that the scientific approach to enquiry can be thought of as a set of 'processes': observing, classifying, hypothesising, inferring, predicting and so on. One well-known example, the course Science - A Process Approach (SAPA) (AAAS, 1967), was based on Gagné's analysis of the processes of science and of learning. Several science courses in the UK in the 1980s also followed this line, some using the processes (rather than science content) to structure the programme, and seeing learning primarily in terms of the development of pupils' 'process skills', that is, their ability to carry out these processes in a range of contexts. Materials to assess 'process skills' were also developed. The characteristic common to all these approaches is that they portray science method as a set of discrete 'thinking skills', which can be practised and developed separately before being combined to tackle more demanding problems.

Many small-scale evaluation projects were carried out on SAPA and other similar programmes. Bredderman (1983) and Shymansky, Kyle and Alport (1983) attempt to synthesise the findings of many such studies on several activity-based elementary science programmes of this period in the USA; they conclude that research shows some gains in various aspects of student learning. The individual studies, however, vary enormously in scale and approach, and in many cases the validity of both data and conclusions is questionable. Tests of students' learning can be criticised as too similar in structure or content to those used in teaching, so that successful performance reflects drilled responses or rote recall, rather than understanding - a perennial problem for any studies of transfer of learning from one context to another.

Problems with the 'process approach', however, run deeper than the issue of empirical support for its claims of effectiveness. The 'process' view of science method has been strongly criticised on epistemological grounds (for example, by Finlay, 1983; Millar and Driver, 1987; Hodson, 1990). Its view of scientific enquiry as beginning with unbiased observation, followed by classification of observations, leading to the emergence of hypotheses (in the form of generalisations or explanatory models) is a strongly (one might even say naively) empirical and inductive one, which receives little support from contemporary philosophy of science. Classroom studies demonstrate clearly the influence of prior ideas on observation (Hainsworth, 1956; Gott and Welford, 1987). The problems of discovery learning, in which the inductive approach is taken to its logical conclusion, are well documented (Atkinson and Delamont, 1977; Wellington, 1981; Harris and Taylor, 1983). Nor is the hypothetico-deductive element of the process approach without its problems. School laboratory experiments are not severe tests, in the Popperian sense, of the accepted scientific view (or even of children's own explanations).

The process approach has also been criticised on pedagogic grounds - that the ability to observe, classify, hypothesise and so on is something which every child possesses from infancy (Millar and Driver, 1987). If so, it is a mistake to believe that these 'process skills' need to be taught. Children's ability to use them, however, depends on the extent and confidence of their knowledge of the contexts they are asked to work on. This would explain, for instance, the finding that performance of tasks requiring these 'process skills' is strongly context-dependent (Song and Black, 1991; Lock, 1993).

The process approach is not, therefore, a sound basis for curriculum planning, nor does the analysis on which it is based provide a productive framework for research.

2 The scientific approach as logical strategy

One of the characteristics of scientific thinking is the commitment to logical reasoning in relating evidence and explanation. This, together with the use by Piaget of scientific contexts for his studies of young people's reasoning, has led some to identify 'scientific thinking' with the kinds of 'logical thinking' which Piaget came to see as characteristic of formal operational thought (Inhelder and Piaget, 1958). One indication of formal operations, in Piaget's view, was an understanding of the need to control variables in experiments with several independent variables, if valid conclusions are to be drawn. Many studies have been carried out by science educators to explore students' ability to control variables in multivariate tasks, and to evaluate the success of various approaches for teaching this. Lawson (1985) provides a comprehensive and detailed review. In general terms, research shows that many school students have difficulty in planning multivariate experiments and in interpreting results from such experiments, and that performance improves with age (Wollman, 1977; Karplus et al., 1979). As in other science domains, children's prior ideas and intuitions are important: the idea of 'fairness' in making comparisons is readily grasped by many children from age 7-8 onwards. Wollman and Lawson (1977), however, note that this basic idea 'does not usually develop spontaneously into a clear, generally applicable procedure [for planning experiments] as witnessed by the numerous studies of adolescents and adults' (p. 57). Levels of performance are also significantly influenced by the content and context of the task (Linn, 1980; Linn et al., 1983; Song and Black, 1992), and students perform differently on tasks concerning 'natural experiments' (accounts of observed events occurring in an everyday setting) as compared to laboratory experiments (Kuhn and Brannock, 1977).
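
The logical core of the control of variables scheme can be stated quite compactly: two experimental runs license a conclusion about a factor only if that factor alone differs between them. The short Python sketch below makes the rule explicit; it is my own illustration, not drawn from any of the studies cited, and the pendulum factors and values are hypothetical.

    def fair_test_factor(run_a, run_b):
        """Return the single factor this pair of runs fairly tests, or None."""
        differing = [f for f in run_a if run_a[f] != run_b[f]]
        return differing[0] if len(differing) == 1 else None

    # Hypothetical pendulum experiments: length, mass and release angle.
    runs = [
        {"length_cm": 20, "mass_g": 50, "angle_deg": 10},
        {"length_cm": 40, "mass_g": 50, "angle_deg": 10},   # only length differs from run 0
        {"length_cm": 40, "mass_g": 100, "angle_deg": 20},  # two factors changed at once
    ]

    for i in range(len(runs)):
        for j in range(i + 1, len(runs)):
            factor = fair_test_factor(runs[i], runs[j])
            if factor is not None:
                print(f"runs {i} and {j} fairly test '{factor}'")
            else:
                print(f"runs {i} and {j} are confounded: no single factor differs")

Runs 0 and 1 form a fair test of length; the third run, in which two factors were changed at once, supports no conclusion about either of them.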

Many studies have also been carried out to evaluate teaching interventions intended to improve students' performance. Kuhn and Angelev (1976) found that practice in solving problems about control of variables led to improvement, but that explicit discussion of the solutions added little extra. Rowell and Dawson (1984) reported significant achievement gains using a teaching approach based on a general solution procedure, but again noted a strong influence of context on performance.

As with research on science 'processes', a central problem for work on teaching control of variables is to devise post-instruction test items which are sufficiently different from those used in teaching to convince us that some transfer of understanding has occurred, whilst not making the difference so great that a null result is inevitable. The extent to which individual studies achieve this is, inevitably, a matter of judgement. Taken as a whole, the literature suggests that students' performance is likely to improve with age and with exposure to tasks requiring this form of reasoning, and may be further enhanced by specific interventions if these are carefully planned. Both before and after any targeted teaching there is likely to be considerable variation in performance across contexts and between 'natural' and planned experiments.

The large surveys of student performance carried out in the UK by the Assessment of Performance Unit (APU) in the late 1970s and early 1980s were based on an analysis of science performance which emphasised the ability to carry out a scientific investigation, seeing this as requiring a synthesis of all the sub-components of performance. The investigation tasks which the APU used were of the control of variables type and this, in turn, strongly influenced the way in which an investigative component was incorporated into national examinations at age 16 and thence into the English National Curriculum (DES/WO, 1989). The APU research (APU, 1987), and later work specifically related to the National Curriculum (Gott and Duggan, 1995), corroborates the principal findings of the earlier work reported above, showing that students find tasks involving continuous variables appreciably more demanding than those involving categoric variables (comparisons), and that performance is strongly influenced by the science content of the investigation task and the context in which it is set (everyday or laboratory). Students' procedural knowledge - the APU's term for the understandings related to investigation performance - appears to account for only a small part of the observed variation in performance across tasks, with science content knowledge and informal knowledge of the context being much more significant. A similar result is reported by Erickson et al. (1992) from a major survey of students' performance in British Columbia.

A more general criticism, however, of the APU's, and later the English National Curriculum's, approach is that it limits 'scientific investigations' to tasks about the inter-relationships of a number of variables. Whilst the procedure of conducting and interpreting a controlled experiment is important in all the scientific and technological disciplines, there is surely more to scientific investigation than this.

A rather different kind of outcome is reported by Schauble and her co-workers (Schauble et al., 1991) from a study of students carrying out control of variables tasks. They see students as moving from an 'engineering approach', in which variables are altered to optimise an effect, towards a 'scientific approach', in which the inter-relationship of variables is explored. This places the emphasis not on the students' technical competence in manipulating variables, but on their understanding of the purpose of doing so. The findings suggest that students may need to be helped to understand the goals of investigative work in science if they are to perform as we would wish.

3 The scientific approach as problem-solving

One problem with both the 'process' and 'control of variables' approaches is that they are fundamentally 'algorithmic' in orientation: they portray science investigations as following an invariant template. They imply that there is a 'scientific method', rather than something rather looser and more flexible which we might term a 'scientific approach' to enquiry. The 'control of variables' approach achieves this algorithmic character by reducing the scope of what counts as a 'scientific investigation', and by implicitly (perhaps unintentionally) adopting a strongly empiricist view of scientific knowledge, in which theoretical constructs (the variables deemed to be relevant) 'emerge' from the situation rather than being imposed on it by the investigator's prior understandings.

Some researchers, working from a cognitive science perspective, have treated the scientific approach as a form of problem-solving. Klahr and Dunbar (1988), for example, asked university students to investigate the function of one of the control buttons on the programmable robotic toy 'Bigtrak'. They analysed detailed records of the sequence of investigations carried out by each student to solve this problem, and argued that this can best be understood as a search through two memory domains: the domain of possible experiments, and the domain of possible hypotheses. Klahr and Dunbar criticise previous work by cognitive scientists on scientific reasoning, on the grounds that the tasks set are poor analogies for real situations where scientific thinking is required. Their own study can, however, be criticised on similar grounds: the task carries little conceptual load; the range of possible experiments is quite narrowly bounded; the outcome of each experiment is fairly clear cut; and the students know that the button they are investigating does have a unique and fairly simple function, which is known to someone. None of these is likely to be the case for a genuine science 'problem'.

Nonetheless, it is useful, I think, to see the task of tackling a science investigation as involving some kind of search through a 'problem space'. Ideas are drawn from long-term memory, triggered by aspects of the problem being tackled. In designing an investigation, the investigator chooses 'tools' from his or her 'toolkit'. Not all are needed for any given investigation; skill resides in making the right choices and knowing how to use the tools required. Millar (1990) proposes a model of this sort, in which procedural understanding is divided into three categories: general cognitive skills (such as observing, classifying, and so on), practical techniques (such as knowing how to use various measuring instruments) and enquiry tactics (such as knowing to repeat measurements to improve their reliability). He argues that the skills in the first category cannot be taught, since they are general features of cognition which all children possess. The other categories can be taught, but their selection and linkage into a strategy for tackling any given investigation is not simply a matter of following a 'set of rules'. Erickson (1994) has used this framework to analyse the performance of students on an investigative task involving magnets. Amongst other findings, he shows the influence of students' conceptual knowledge of magnetism on their choices of investigation procedure.

A similar model (Figure 1), emphasising the selection of relevant ideas from memory, is proposed by the PACKS project (Millar et al., 1994) for interpreting data on students' performance of seven science investigation tasks.
 

Figure 1 Designing a science investigation: a simple model

Their analysis of students' responses led to an elaboration of this basic model, identifying four specific aspects of understanding linked to different stages in the progress of an investigation (Figure 2).
 
 

 
Figure 2 Relating understandings to actions in carrying out a science investigation: the PACKS model

One of these, understanding of the relevant science content, has been shown by many studies to influence performance significantly. It is also easy to accept that ability (and skill) in using measuring equipment and other apparatus influences performance. A third category, labelled 'frame', develops Schauble's idea of 'approach', drawing attention to the influence of an understanding of the purpose of an investigation task on its interpretation by students, and hence on their actions. The fourth aspect, an understanding of the nature of empirical evidence (that is, an understanding that measurements are subject to error, of how to reduce this, and of how to assess the reliability of the data collected), had a particularly strong influence on the overall quality of students' performance on the PACKS investigation tasks. (This aspect of procedural understanding is discussed further in section 4 below.)

4 The scientific approach as the collection of empirical evidence

The three perspectives discussed above emphasise the ability to design and execute a strategy for tackling a given investigation. It may, however, be more productive to shift the emphasis from this essentially creative aspect of investigating on to the stage of using data as evidence to justify conclusions. The distinction is similar to that made by philosophers between the context of discovery and the context of justification of scientific knowledge. The former is much more difficult to describe and explain - indeed there may be little we can say about it at all. The same may apply to teaching students how to carry out science investigations: it may be more feasible to teach them how to evaluate their data and present justifications to support conclusions, than to teach them how to tackle new tasks. From this sort of perspective, some researchers have explored students' understanding of measurement, data collection, and the interpretation and use of data as evidence. The PACKS project, discussed above, found a lack of understanding of ideas about the collection and evaluation of empirical data to be a major weakness in many students' work. Using a survey instrument to collect students' responses to diagnostic questions involving the interpretation and evaluation of data, Lubben and Millar (forthcoming) propose a sequence of levels of understanding of measurement in science. At the lowest level, students see measurement as unproblematic: a careful measurement yields the 'true' value. At an intermediate level, students appreciate the possibility of error, and may know that repeating measurements is a way to improve their results, but still consider that the quality of a value can only be judged by appeal to an external authority (a teacher, or a data book). The highest level involves understanding how the variation in repeat readings can be used to assess the 'trustworthiness' of a measurement. Séré and her colleagues (1993) report on understandings of similar ideas at a more advanced level amongst university students.
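
To make the highest level concrete, the sketch below shows how the spread of repeat readings can itself serve as an index of trustworthiness, with no appeal to external authority. It is a minimal illustration of my own, not the Lubben and Millar diagnostic instrument, and the timing data are invented.

    from statistics import mean, stdev

    def summarise(readings):
        """Best estimate plus a simple spread-based uncertainty (half the range)."""
        best = mean(readings)
        half_range = (max(readings) - min(readings)) / 2
        return best, half_range

    times = [2.31, 2.45, 2.38, 2.40]   # invented repeat timings, in seconds
    best, unc = summarise(times)
    print(f"best estimate = {best:.2f} s, uncertainty = +/- {unc:.2f} s")
    # A more formal measure of spread, of the kind treated at university
    # level (cf. Sere et al., 1993):
    print(f"sample standard deviation = {stdev(times):.3f} s")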

The significance of this sort of understanding is very clearly shown by a recent study (Bailey and Millar, 1996) in which students (aged 11-16) were asked to draw conclusions from tables of data from multivariate experiments. In some questions, 'ideal' results were presented, with only one measurement of the dependent variable for each combination of values of the independent variables, and showing no change in the dependent variable when an independent variable which has no effect is altered. A parallel question presented more realistic data, giving three repeat readings for the dependent variable each time, and showing small changes in the average value of the dependent variable (but within the range of repeat readings) when an independent variable which has no effect is altered. Students' ability to draw the intended conclusion and to explain their reasoning was markedly lower on the second type of question, and correct answers all came from students who had got the corresponding question on 'ideal' data right. This suggests that understanding how to assess whether small differences are evidence of a real effect or are simply due to experimental uncertainty poses an additional, and quite significant, problem for students on top of the requirement to reason logically about variables.
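
The extra demand posed by the 'realistic' items can be expressed as a simple decision rule: a difference in averages counts as evidence of a real effect only if it is large compared with the variation within the repeat readings. The sketch below is a hedged illustration under my own assumptions; the overlapping-ranges heuristic is a common school-level convention, not the criterion used by Bailey and Millar, and the readings are invented.

    def ranges_overlap(a, b):
        """True if the ranges of two sets of repeat readings overlap."""
        return min(a) <= max(b) and min(b) <= max(a)

    def real_effect(readings_1, readings_2):
        """Crude school-level rule: claim an effect only if the ranges are disjoint."""
        return not ranges_overlap(readings_1, readings_2)

    def average(xs):
        return sum(xs) / len(xs)

    # Invented repeat readings of the dependent variable at two settings of
    # an independent variable: the averages differ, but the ranges overlap.
    low_setting = [14.0, 15.2, 14.6]
    high_setting = [14.8, 15.5, 14.3]

    print(f"averages: {average(low_setting):.2f} vs {average(high_setting):.2f}")
    print(f"real effect? {real_effect(low_setting, high_setting)}")   # False

Here the averages differ, but the difference lies well within the spread of the repeat readings, so the scientifically appropriate conclusion is that no effect has been demonstrated.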

Teaching (and researching into) students' understandings of scientific enquiry: a way forward

As I pointed out earlier, in most science content domains, there is broad agreement about the kinds of understandings we are aiming to develop in school science, and the kinds of tasks we want students to be able to carry out successfully. As regards science enquiry procedures, there is less agreement about both of these. A first requirement, therefore, is greater clarity about what we want our students to be able to do: if we want them to become better at carrying out a scientific investigation for themselves, then we need to agree about the kinds of task which will count as 'scientific investigations'. Then we may be able to analyse the types of understanding required to carry out such tasks, and develop models to link aspects of understanding to aspects of performance. At present, the model developed by the PACKS project (Figure 2) is the most detailed available. An important feature of this model is that it does not portray scientific enquiry as 'rule governed'. Instead, it sees enquiry as involving a search of available knowledge in four sub-categories. More work, however, is needed to test and improve this model. We need to know more, for example, about the kind of memory elements recalled: are they isolated facts, or principles, or are they, as may be more likely, larger 'scripts' drawn from everyday or previous school experience? And how can we help students access those elements of their knowledge which are salient and useful in responding to a given task?

From a teaching point of view, I think it is helpful to see performance in practical investigation tasks as substantially knowledge-based (rather than, for instance, to think of it as showing 'skill', which is a very loosely defined term). Many studies have shown that science content knowledge is very important: the more you know about the science ideas which relate to an investigation, the better (from a scientific perspective) your investigation is likely to be. But other areas of understanding are also important. Amongst these are:

a clear understanding of the purpose of the investigation (in science, directed towards explanation rather than optimisation of an object or an effect);

an understanding of the idea of a variable, and of control of variables in multivariate experiments; and

an understanding of the problem of collecting valid and reliable data, and of how to assess the validity and reliability of the data one has collected.

This teaching agenda also generates its own research agenda, into students' understandings and how these develop, and into the effectiveness of specific kinds of teaching intervention.

Finally, it is worth noting that these understandings are not only important in the context of student investigations. They are also essential in teaching programmes which do not involve students in designing and carrying out science investigations. Unless students understand these key ideas about scientific enquiry, they have a rather weak basis for engaging in class discussions about the interpretation of data from practical exercises and teacher demonstrations intended to illustrate, and provide warrants for accepting, established science ideas. Procedural understanding is not an optional extra; rather it underpins the teaching and learning of science content.

References

American Association for the Advancement of Science (AAAS) (1967). Science - A Process Approach. Washington, DC: AAAS.

Assessment of Performance Unit (APU) (1987). Assessing Investigations at Ages 13 and 15. Science Report for Teachers: 9. London: DES/WO/DENI.

Atkinson, P. and Delamont, S. (1977). Mock-ups and cock-ups - the stage management of guided discovery instruction. In P. Woods and M. Hammersley (eds.), School Experience: Explorations in the Sociology of Education (pp. 87-108). London: Croom Helm.

Bailey, S. and Millar, R. (1996). From logical reasoning to scientific reasoning: students' interpretation of data from science investigations. Science Education Research Paper 96/01. Department of Educational Studies, University of York.

Bredderman, T. (1983). Effects of activity-based elementary science on student outcomes: a quantitative synthesis. Review of Educational Research, 53 (4), 499-518.

Department of Education and Science/Welsh Office (DES/WO) (1989). Science in the National Curriculum. London: HMSO.

Erickson, G., Bartley, R.W., Blake, L., Carlisle, R.W., Meyer, K. and Stavy, R. (1992). British Columbia assessment of science 1991 technical report II: Student performance component. Victoria, B.C.: Ministry of Education and Ministry Responsible for Multiculturalism and Human Rights.

Erickson, G. (1994). Pupils' understanding of magnetism in a practical assessment context: the relationship between content, process and progression. In P. Fensham, R. Gunstone and R. White (eds.), The Content of Science (pp. 80-97). London: Falmer.

Finlay, F. (1983). Science processes. Journal of Research in Science Teaching, 20 (1), 47-54.

Gott, R. and Duggan, S. (1995). Investigative Work in the Science Curriculum. Buckingham: Open University Press.

Gott, R. and Welford, G. (1987). The assessment of observation in science. School Science Review, 69 (247), 217-27.

Hainsworth, M.D. (1956). The effect of previous knowledge on observation. School Science Review, 37, 234-42.

Harris, D. and Taylor, M. (1983). Discovery learning in school science: The myth and the reality. Journal of Curriculum Studies, 15 (3), 277-89.

Hodson, D. (1990). A critical look at practical work in school science. School Science Review, 71 (256), 33-40.

Inhelder, B. and Piaget, J. (1958). The Growth of Logical Thinking from Childhood to Adolescence. London: Routledge and Kegan Paul.

Karplus, R., Karplus, E., Formisano, M. and Paulsen, A.C. (1979). Proportional reasoning and control of variables in seven countries. In J. Lochhead and J. Clement (eds.), Cognitive Process Instruction (pp. 47-103). Philadelphia, PA: Franklin Institute Press.

Klahr, D. and Dunbar, K. (1988). Dual search space during scientific reasoning. Cognitive Science, 12 (1), 1-48.

Kuhn, D. and Angelev, J. (1976). An experimental study of the development of formal operational thought. Child Development, 47, 697-706.

Kuhn, D. and Brannock, J. (1977). Development of the isolation of variables scheme in experimental and 'natural experiment' contexts. Developmental Psychology, 13 (1), 9-14.

Lawson, A.E. (1985). A review of research on formal reasoning and science teaching. Journal of Research in Science Teaching, 22 (7), 569-617.

Linn, M.C. (1980). Teaching students to control variables: some investigations using free choice experiments. In S. Mogdil and C. Mogdil (eds.), Towards a Theory of Psychological Development (pp. 673-97). Windsor: NFER.

Linn, M.C., Clement, C. and Pulos, S. (1983). Is it formal if it's not physics? (The influence of content on formal reasoning). Journal of Research in Science Teaching, 20 (8), 755-70.

Lock, R. (1993). Assessment of practical skills. Part 2. Context dependency and construct validity. Research in Science and Technological Education, 8 (1), 35-52.

Lubben, F. and Millar, R. (forthcoming). Children's ideas about the reliability of experimental data. International Journal of Science Education.

Millar, R. (1990). A means to an end: The role of processes in science education. In B. Woolnough (ed.), Practical Science (pp. 43-52). Milton Keynes: Open University Press.

Millar, R. and Driver, R. (1987). Beyond processes. Studies in Science Education, 14, 33-62.

Millar, R., Lubben, F., Gott, R. and Duggan S. (1994). Investigating in the school science laboratory: conceptual and procedural knowledge and their influence on performance. Research Papers in Education, 9 (2), 207-248.

Rowell, J. A. and Dawson, C.J. (1984). Controlling variables: Testing a programme for teaching a general solution strategy. Research in Science and Technological Education, 2 (1), 37-46.

Schauble, L., Klopfer, L.E. and Raghavan, K. (1991). Students' transition from an engineering model to a science model of experimentation. Journal of Research in Science Teaching, 28 (9), 859-882.

Séré, M.-G., Journeaux, R. and Larcher, C. (1993). Learning the statistical analysis of measurement error. International Journal of Science Education, 15 (4), 427-438.

Shymansky, J.A., Kyle, W.C. and Alport, J.M. (1983). The effects of new science curricula on student performance. Journal of Research in Science Teaching, 20, 387-404.

Song, J. and Black, P. J. (1991). The effect of task contexts on pupils' performance in science process skills. International Journal of Science Education, 13 (1), 49-58.

Song, J. and Black, P. J. (1992). The effect of concept requirements and task contexts on pupils' performance in control of variables. International Journal of Science Education, 14 (1), 83-93.

Wellington, J.J. (1981). 'What's supposed to happen, sir?' - some problems with discovery learning. School Science Review, 63 (222), 167-73.

Wollman, W. and Lawson, A.E. (1977). Teaching the procedure of controlled experimentation: A Piagetian approach. Science Education, 61, 57-70.

Wollman, W. (1977). Controlling variables: Assessing levels of understanding. Science Education, 61, 371-83.
 
 

********************************************
 
Section C4, 'Students' Understanding of the Procedures of Scientific Enquiry', from: Connecting Research in Physics Education with Teacher Education. An I.C.P.E. Book. © International Commission on Physics Education 1997, 1998. All rights reserved under International and Pan-American Copyright Conventions.
 
